DevOps in the team

We have reached the end of this DevOps assignment. To be honest, it was quite interesting: some tasks were pretty easy, but others taught me a lot of new things. For instance, I already had a working Linux virtual machine, 2FA on GitHub, and a bit of knowledge of Apache, so the new things, like cron jobs, setting up a LAMP stack, and developing my own continuous integration system, were really interesting.

There are existing frameworks and services that can replace most of this work; some of these are:

Jenkins: An open source CI/CD server that allows the automation of the different stages of a delivery pipeline.

Pros:

  • It is open source and free
  • Has a wide range of plugins
  • Integrates and works with all major tools
  • Has APIs that let you tailor the amount of data you fetch

Cons:

  • Unpredictable costs derived from the server
  • Lack of governance
  • Lack of analytics

[Source]

Bamboo: Atlassian’s CI/CD server solution that has many similar features to Jenkins.

Pros:

  • Integrates with other Atlassian tools
  • Great notification scheme
  • Easy enterprise-grade administration

Cons:

  • Doesn’t support forks
  • Very limited basic license

[Source1, Source2]

TeamCity: a Java-based build management and continuous integration server from JetBrains.

Pros:

  • Easy installation
  • Cross-platform build support
  • Supports build chains

Cons:

  • Expensive
  • Inter-branch merges trigger emails to unrelated committers
  • Plugins don’t get updated often

[Source]
Continuous integration and test automation are some of the most important excise tasks needed to survive in the fast-paced environment of software development. Changes need to be built as fast as possible, and errors should be detected and corrected as soon as possible, so having this in place is definitely a lifesaver. It is important to value these tasks, and the people who perform them, within the team.

star

The U in CommUnication

We were recommended to read Blogging and me by Ana, and to be honest it was the most pleasant assignment I have had so far.

The way Ana expresses her experiences is really entertaining, and I liked that she used the Wayback Machine to explore her own digital footprint. I tried searching her blog, ohhelloana.blog, on the Wayback Machine to see what she was talking about regarding her old posts, but the oldest snapshot I found was from 2017, so I am guessing she was using another URL before. Even so, the writing from two years ago has a slightly different style, but the humor and character are still there, which I find quite neat.

There are a lot of things that she mentions that I hadn’t discovered yet, like Dynamic Drive, which led me into investigation mode and I learned a lot of new stuff.

I am sorry to hear about the troubles she had upon entering her first tech job. Some of the things she thought at that time about her work are the same fears I have about my future, but as many other life experiences have taught me, the conclusion seems to be the same: as long as you keep moving, at whatever pace and in whatever direction you need at the moment, you will eventually find yourself in a better place.

The fact that she shared other people’s thoughts also eased me a bit, and I think it is true: as long as you are putting information out into the world, someone will find it useful sooner or later. We should work with our fears and keep delivering better and better stuff each time, instead of not doing anything at all.

Her blog post also made me realize that I too miss the useless web, I remember I used to go on to https://theuselessweb.com/ and just spend time having fun with it. I would recommend you give it a shot, it will be worth it, I promise.

The most important thing this blog post made me realize is that I would like to keep close the things that make me happy. As simple as that sounds, I know it can be difficult to keep in mind when life gets too busy or messy.

Pytest and Github Status Page

This is the 4th part of a 5 stage assignment on DevOps. This time we are going to make sure that we can use pytest via the command line, and set up a status page that shows the status of the build.

So first things first, we need to have pytest installed, you can use the command: sudo apt install python-pytest

I followed the simple example found here, and this is how the output looks for a failing test and a passing test:

test1

test2

Now, to set up the status page, I used a very basic HTML page based on the Apache default one, which looks like this:

pagepage2

Each time the cron job pulls from the repository, other jobs run the unit tests as shown before and use JavaScript to update the <p> tag with id “build” and the <div> tag with id “about” with the results of the build. For this last part I used Rhino, which can be installed on Ubuntu with the command: sudo apt-get install rhino, but other methods can be used to achieve this.
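As a rough sketch of one such alternative in Python rather than Rhino: the page could be regenerated from a template instead of edited in place. The template and function below are illustrative (they assume {build} and {about} placeholders, which my actual page does not literally use):

```python
# sketch: regenerate the status page from a template with placeholders,
# keeping the same id="build" and id="about" elements as the real page
TEMPLATE = """<html><body>
<p id="build">{build}</p>
<div id="about">{about}</div>
</body></html>"""

def render_status(passed: bool, detail: str) -> str:
    """Fill in the build result and a short description of the run."""
    status = "passing" if passed else "failing"
    return TEMPLATE.format(build=f"Build: {status}", about=detail)
```

A cron job could then run the tests, call render_status with the result, and write the returned string over the served HTML file.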

Now, how could you update the README page of your repository to reflect the build status using this setup? At the same time the HTML is edited, I could also call a job that edits the README, using badges saved as images hosted on a free service like ImgBB, swapping them at the beginning of the README and then committing and pushing those changes.
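That README job could be sketched like this in Python. The badge URLs are placeholders, not real hosted images; the sketch assumes the badge lives on the first line of the README:

```python
# sketch: swap the build badge at the top of a README
# (the badge URLs below are illustrative placeholders, not real images)
PASSING = "![build](https://example.com/passing.png)"
FAILING = "![build](https://example.com/failing.png)"

def update_badge(readme_text: str, passed: bool) -> str:
    """Replace (or insert) the badge on the README's first line."""
    badge = PASSING if passed else FAILING
    lines = readme_text.splitlines()
    if lines and lines[0].startswith("![build]"):
        lines[0] = badge          # swap the existing badge
    else:
        lines.insert(0, badge)    # or add one if it isn't there yet
    return "\n".join(lines)
```

After rewriting the file, the job would `git commit` and `git push` the change, the same way the pull job already talks to the repository.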

Smalltalk Testing, Old Nintendo and Testing with Python.

Kent Beck talks about testing in Smalltalk, with patterns and a framework for it, in Simple Smalltalk Testing: With Patterns.

You can find annotations on it using Hypothes.is, here is mine, for example:

annotation

This is on the Wayback Machine site, a digital library of Internet sites and other cultural artifacts in digital form, which includes different versions of them throughout time. I searched for the Nintendo web page and found that the oldest version of it on the site was from December 22nd, 1996, and it looks like this:

nintendo-old.PNG

The difference is outstanding.

Now, changing the topic back to testing: I followed the LinkedIn Learning course Unit Testing and Test Driven Development in Python, which has a really complete introduction to TDD and unit testing, including definitions of the terms used in this kind of testing, such as test discovery and test fixtures, accompanied by basic examples of how to use pytest within PyCharm and other tools like Eclipse PyDev. What I liked the most is that it also gives insight into related concepts, such as Uncle Bob’s three laws of TDD and Python virtual environments.
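As a small illustration of two of those terms (the file and names are my own invention, not from the course): test discovery finds the test_* functions on its own, and the fixture hands each test a fresh copy of the input data before it runs:

```python
import pytest

# the fixture runs before each test that names it as a parameter,
# so every test starts from the same known state
@pytest.fixture
def inventory():
    return {"apples": 3, "pears": 0}

def take(stock, item):
    """Remove one unit of item from stock, failing if none are left."""
    if stock.get(item, 0) <= 0:
        raise ValueError(f"out of {item}")
    stock[item] -= 1
    return stock[item]

def test_take_decrements(inventory):
    assert take(inventory, "apples") == 2

def test_take_empty_raises(inventory):
    with pytest.raises(ValueError):
        take(inventory, "pears")
```

Because each test gets its own dictionary from the fixture, the tests stay independent no matter which order discovery runs them in.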

I used PyCharm to perform the tasks. First, I had to make sure that the default test runner is pytest; you can set it from Settings:

config

And when you run the tests you should see something like this:

pytest.PNG

Github, SSH and keys

This is the third part of a 5 stage activity about DevOps.

This time we need a GitHub account. I created a separate repository to test this DevOps lab. We will also need two-factor authentication enabled; I already had that part done, but here you can find a simple guide if you need it.

To set up the SSH keys, use them to connect to GitHub, and test that it all worked, I used the following guides:

ssh

Then I did a git clone of my new repository onto the server. Remember to use the SSH link, which in my case is git@github.com:CarminaP/VMtesting.git

git clone

I added a simple HTML file with some changes, then updated the server with git pull origin master.

pull

To automate the updates I used cron jobs, like the ones we saw in the last post; this guide really helped me out.
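For reference, the crontab entry for such a pull job could look roughly like this (the five-minute interval and repository path are illustrative; add the line with crontab -e):

```shell
# pull the repository every five minutes (illustrative schedule and path)
*/5 * * * * cd /home/user/VMtesting && git pull origin master
```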

Linux Setup

This is the second part of a 5 stage activity about DevOps.

First, we need a Linux distribution. I decided to use a virtual machine for this, and here is a really cool guide to installing Ubuntu on a virtual machine using VirtualBox.

I used the Ubuntu 18.04.2 LTS Desktop version, which can be found here.

Then we have to install the support for a development environment.

For this I will start with Python, since Ubuntu comes with a command-line version of it pre-installed.

Installing git is as easy as:
sudo apt update
sudo apt install git
but if you want more information about installing it from source or setting it up you can check out this guide.

To set up for web deployment, I chose the classic LAMP stack with Apache, MySQL and PHP. Here is a complete guide that helped me with this.

Lastly, to get started with cron, I made a task that runs the command /home/user/Desktop/example.sh, which writes “Hello World” to the console at 12:00 a.m. every day. This tutorial helped me out.
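The crontab line for that schedule looks like this (12:00 a.m. daily is minute 0, hour 0; add it with crontab -e):

```shell
# contents of example.sh (make it executable with chmod +x)
#!/bin/bash
echo "Hello World"

# crontab -e entry: run the script at 12:00 a.m. every day
# 0 0 * * * /home/user/Desktop/example.sh
```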

cron

And that is all for this week, next week we will look at how to setup GitHub and connect it to the server.

DevOps

DevOps is a combination of software development (Dev) and information technology operations (Ops). It is a fairly new term that has emerged from the collision of two major related trends:

  • The “agile infrastructure” or “agile operations”, created while applying Agile and Lean approaches to operations work.
  • The collaboration between development and operations staff throughout all stages of the development lifecycle.

In a traditional software development life-cycle process, it takes weeks for the Dev team’s work to be placed into production.

When the Dev team’s code is finally deployed into the Production Environment, occasionally unexpected errors or problems occur, because the Dev team is focused on writing code for its Development Environment, which is not identical to the Production Environment.

The Operations team is responsible for maintaining the uptime of the Production Environment. The growing number of servers within the company becomes a challenge that affects how new code is deployed, which is why the Ops team usually requires code deployments to be scheduled, often allowing them only once a month. Once the code is deployed into the Production Environment, the Ops team is responsible for diagnosing errors or problems caused by the changes.

Therefore, adopting the DevOps philosophy requires a new mindset, new tools, and new skills.

DevOps can be implemented in 3 phases:

1. Automated Testing

This is the foundation of DevOps competency. It involves writing tests within the code so that every change can be evaluated.

2. Continuous Integration

Once we have effective test coverage, the entire testing process is automated. The concept is based on running the code through every possible iteration to find out, in an automated way, whether a change introduces any bug.

Jenkins is one of the most popular tools used to implement continuous integration.

3. Continuous Delivery

It consists of writing code in small chunks that are integrated, tested, monitored and deployed. The continuous delivery pipeline and its tools are different for each organization. A pipeline is a series of phases, each backed by a specific tool. It usually has six key phases:

  • Plan & write code: the Dev team plans and writes code using a Code Configuration Management tool such as Git.
  • Build & Test: while writing the code, we can build and test with a tool like Jenkins.
  • Release & Deploy: there are tools (such as Puppet and Chef) that help automate the process of delivering that code to a cloud environment (such as Amazon Web Services, Heroku, etc.) or a server.

Serverless architecture is a newer computing approach in which systems are built on cloud or third-party services instead of depending on self-managed servers. Docker is often mentioned alongside this trend, although strictly speaking it is a container platform rather than a serverless one.

Some benefits of DevOps include:

  • Increased rate of software delivery
  • Faster time to market
  • Better business focus by automating the infrastructure
  • Higher software quality and efficiency
  • Fewer Bugs
  • Lower delivery cost

Resources:

https://resources.collab.net/devops-101/what-is-devops

https://theagileadmin.com/what-is-devops/

What is DevOps? “In Simple English”

A certain bug: “We’re a lot stronger than you say we are”

After reading The Secret Life of Bugs: Going Past the Errors and Omissions in Software Repositories by Jorge Aranda and Gina Venolia, I realized that software bugs are more elusive than I had accounted for.

The goal of the study was to provide an account of the coordination involved in bug fixing tasks, but the second research question they tried to answer is what had me most captivated: do electronic traces of interaction provide a good enough picture of coordination, or is nonpersistent knowledge necessary to understand the story of each bug fix?

The short answer is no: nonpersistent knowledge is still necessary, and that is reflected in all their work, from the limitations to the case study and survey results. As they put it: electronic repositories hold incomplete or incorrect data more often than not.

This shows that even the simplest of bugs involves social, organizational and technical knowledge that cannot be stored or analyzed in an efficient, automated way. That is very interesting given that, as a software developer, one will spend more time maintaining and modifying other people’s code than writing one’s own from scratch. Current technology and coding culture still have a long way to go until we know how to deal with bugs better, while also documenting them well enough to avoid similar occurrences in the future.

This study is interesting if you want to learn about the limitations that surround software development studies, how elusive bugs are, ways to deal with the complexity of tracking a bug’s history, and a set of goals that provide a framework to analyze the effectiveness of coordination.

If you have any interest in those fields, then I absolutely recommend you give this study a read.

Testing with Mocha

Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser. It runs independently from the assertion library, so you can choose whichever works for you. It allows you to choose other interfaces for defining test suites besides TDD, like BDD, Exports, QUnit and Require. You can also integrate generators into your test suite or customize the colors of the test reporters.

Some downsides are that tests cannot be run in random order, and that Mocha requires developers to select and set up assertion libraries and mocking utilities, which can be intimidating for beginners.

mocha

You can install Mocha using the Node Package Manager (NPM).

With global installation

npm i --global mocha

This makes the mocha CLI binary available for use in your command-line terminal so you can run tests using: mocha

You can also install it as a development dependency for your project:

npm i --save-dev mocha

If you do this, you can access the mocha binary from the node_modules directory of your project as follows: ./node_modules/mocha/bin/mocha

Mocha automatically looks for tests inside the test directory of your project. So you should go ahead and create a directory with that name in your project root.

Writing tests often requires using an assertion library. If you are using Mocha in a Node.js environment, you can use the built-in assert module as your assertion library. However, there are other options such as Chai, Expect.js, Should.js.

Mocha also provides a variety of style-interfaces for defining test suites: BDD, TDD, Exports, QUnit and Require. If you want to know more about them you can check out Mocha’s documentation.

Here is a simple example in which the Chai assertion library and BDD interface are used:

https://github.com/CarminaP/MochaTesting

From the project root you can run the test script using mocha. The output should look like this:

test

Resources:

https://www.slant.co/options/12696/~mocha-review

https://blog.logrocket.com/a-quick-and-complete-guide-to-mocha-testing-d0e0ea09f09d

A wild but awesome idea

In the test && commit || revert podcast with Kent Beck and Scott Hanselman, they talk about this alternative workflow, which focuses on avoiding investing time in a change that later turns out to be false. The idea is that every time the tests pass you make a commit, and if there is a problem you revert to the last known green state.
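The workflow fits in one line of shell. Here is a sketch as a shell function, where the test command (pytest here) and the commit message are placeholders for your own:

```shell
# test && commit || revert as a tiny shell function:
# if the tests pass, commit; otherwise, throw the change away
# (`pytest` and the commit message are placeholders)
tcr() {
  pytest && git commit -am "tcr: green" || git reset --hard
}
```

Run it after every small change; the history you end up with is a sequence of green states, and anything red simply disappears.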

What I found interesting is that they agree it is a wild idea, but because it is cheap to experiment with and it works, it is also an awesome idea. Why? Because innovation and growth only come from new experiences, and it isn’t likely that someone will compete with you for them.

wild

My only experience with TDD was when I saw it as a topic in a class and practiced it with a small assignment. My interaction with TDD may have been brief, but I learned a lot from it, and I came to understand the sense of security it gives you from beginning to end. The work that seems pointless at first turns out to be a lifesaver later on.

The other thing I found interesting is that even though TDD and TCR (test && commit || revert) both rely on test results to move on, each has a different incentive. TDD assures you that your code works with each addition, so if something fails it is easier to pinpoint the failure. The same can be said of TCR, but it mainly incentivizes the programmer to make smaller changes in a stable way; the thought goes into the sequence of changes rather than the tests themselves. Every step you take costs more, but there is a lower probability that you will do a bunch of work and have to throw it away.

step-step-one

You may be taking tiny steps, but you are making progress toward a goal while continuing to deliver value. This philosophy can also carry over to other programming styles. For example, when you make a change and all tests go red, as Kent sees it, it is most likely a design problem, so ask yourself: why is this happening? What design would solve the problem? And finally, how can I get from where I am now to that new design in small, safe steps that keep delivering functionality along the way? That is better than making big messes even bigger.

And the final lesson I got from the podcast is that learning involves emotional engagement, and that everybody is responsible for their own learning. That seems obvious, but it is lost in the current educational system, which focuses more on the teacher’s responsibility for that learning. Each of us has different learning styles and different topics of interest; no one learns the same way as another person, so it is our task to find out how we learn. In this sense I acknowledge that there is a lot for me to learn about testing; there are other workflows to try and a bunch of testing frameworks to discover.

aladdin