Extreme programming practices that mitigate Technical Debt

1.   What is Technical Debt?

Technical debt (TD), a term first coined by Ward Cunningham [1, 2], can be described as the dichotomy between the best design, which takes longer to implement, and a quick design and implementation chosen for short-term gain. The technical compromise between these two designs, and the extra development effort needed to make changes to the latter software system, represents the debt.

A popular metaphor is that of monetary debt. If TD is not repaid as soon as possible, “interest” is incurred: the extra effort needed to implement changes in the future. When TD is not addressed and systems are merely patched with functional and bug fixes, software entropy increases [3, 4], a condition that deteriorates over time. In some situations, such as creating a proof of concept (POC), it is acceptable to incur some degree of TD. This, however, is known at design time and should be addressed when the POC evolves into an official project.

Another symptom of TD resembles the law of unintended consequences. More often than not, poorly designed systems have longer release cycles and a higher resistance to change. This is because when necessary code changes are made to one part of the system, unintended changes occur elsewhere and have to be addressed. This lengthens the delivery cycle and drives the cost of change higher, often leading to more pressure, which in turn creates more TD.

Some causes of TD:

  1. Last minute emergency additions or omissions of functionality;
  2. Incompetent technical leadership and bad systems architecture;
  3. Efforts to minimize startup capital;
  4. Outsourced software engineering;
  5. Ignoring industry standards;
  6. Building tightly coupled code without modularity (loosely coupled code is easy to modify or change); and
  7. A non-existent test suite, continuous integration or deployment pipeline.

TD manifests itself in the following examples:

  1. Countless branches of the same code base, often for single-customer releases;
  2. A SQL stored procedure spanning 1,000 lines of code (LOC);
  3. Continuous version releases to fix bugs;
  4. Frequent patching of a legacy system; and
  5. Very large methods within classes.

Figure 1: The author’s depiction of technical debt

2.    What is extreme programming?

Kent Beck [5] is widely considered the creator of the XP software development methodology. XP is a forerunner of many of today’s more modern software development methodologies. It is a lightweight, practical methodology with a focus on software quality and on responsiveness to stakeholder requirements and change.

Intensive requirements analysis and extensive documentation are omitted from the process. Teams are small and focused, with a heavy emphasis placed on simplicity and short development cycles. User stories are a core doctrine of XP: projects are initiated by the final users creating user stories that describe the behaviour and functionality of the software system. Functional tests for these requirements are written before any coding begins, and automated testing continues throughout the lifecycle of the project. Code is constantly refactored to reduce complexity, drive efficiency and adhere to standards. The result is extensible and maintainable code.

Good relationships between software engineers and stakeholders are also considered a cornerstone of XP as defined in the XP value system: communication, simplicity, feedback, courage and respect.

A software engineering team does not necessarily follow every XP practice. Self-organizing teams use the XP practices that suit them at a given point in time; as the team grows in maturity, practices are incorporated or omitted accordingly.

Given that most software systems are in a state of flux, XP adapts with the software system without being a technical barrier. XP manages this by having both technical and management practices. There are several XP practices, but for the purposes of this blog, the primary practices of XP are broken down in Table 1.

Table 1: Primary extreme programming practices

2.1  Modularity violations as a cause of technical debt

When code follows a good design and strictly adheres to standards such as the SOLID principles [6], changes to one module do not affect another. A module can be described as a “unit whose structural elements are powerfully connected among themselves and relatively weakly connected to elements in other units” [7]. This inherently creates an orthogonal design.

“Orthogonal design is the union of two principles, cohesion and coupling” [8]. More specifically, loosely coupled code goes hand in hand with high cohesion, which makes responding to changes easy and predictable. Positive side effects of orthogonal design include a clear, logical structure of relationships and significant reuse.

Modularity violations occur when changes to one module unexpectedly induce changes to another module. This is the opposite of orthogonal design and a key indicator of TD: changes to the code are not easily made, and time estimation becomes inaccurate.
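As a hedged sketch of the idea (the class names here are hypothetical, not from the text), compare a report generator that depends on another module’s internals with one that depends only on a narrow interface:

```python
# Illustrative sketch: the first generator reaches into the store's internal
# row layout, so a storage change ripples into reporting -- a modularity
# violation. The second depends only on a narrow interface, keeping the
# modules orthogonal.
from abc import ABC, abstractmethod

class SqlStore:
    def __init__(self):
        # Internal representation; changing it breaks CoupledReportGenerator.
        self.rows = [{"name": "widget", "price": 10}]

class CoupledReportGenerator:
    def total(self, store: SqlStore) -> int:
        return sum(row["price"] for row in store.rows)  # depends on internals

class PriceSource(ABC):
    @abstractmethod
    def prices(self) -> list:
        ...

class SqlPriceSource(PriceSource):
    def prices(self) -> list:
        return [10]  # storage details stay hidden behind the interface

class ReportGenerator:
    def total(self, source: PriceSource) -> int:
        return sum(source.prices())

print(ReportGenerator().total(SqlPriceSource()))  # 10
```

Only the second design lets the storage module change freely without inducing changes in the reporting module.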

These violations must be identified and rectified by relentless refactoring, a cornerstone practice of XP. Strict and comprehensive coding standards must underpin the refactoring efforts of code, another important XP practice. The two XP practices complement one another and when used together lead to a direct and significant decrease in TD.

In the research paper “Comparing Four Approaches for Technical Debt Identification”, Zazworka et al. [9] make substantive findings: a very high correlation, as much as 85%, exists between modularity violations and change-prone classes, as shown in Table 2 below.

Table 2: Correlation between Modularity Violations and change prone classes [9]

2.2  Technical debt estimation variance

An integral function of software engineering is estimating development time for maintenance and for new components. TD can have a significant impact on such estimates. Variance in estimation is depicted by the Cone of Uncertainty: instead of estimates converging, TD creates increased uncertainty at the later stages of the lifecycle, precisely where estimates should be most accurate (shown in Figure 2). This negatively impacts cost, productivity and project schedules.

Figure 2: Adapted Cone of Uncertainty impacted by technical debt [10]

Deadlines frequently being missed, and so-called unforeseen changes to the code during the development cycle, are indicative of time estimation variance due to TD. Accurately predicting development duration is not an easy task; TD makes it far more difficult. Most estimates are given at the start of a phase, and when TD is encountered those estimates become inaccurate because of the extra work effort involved.

2.3  Measurement of Technical Debt

In the research recorded in Measuring Architectural Technical Debt [11], Kuznetcov defines TD as follows:

[TD] Principal – cost of eliminating TD (Code Debt, Design Debt etc.).

[TD] Interest probability – the probability that the type of TD will have an impact on the system (likelihood of imminent refactoring).

[TD] Interest amount – the extra cost incurred for addressing the TD (cost of modifying a module earmarked for refactoring as opposed to the cost of modifying after refactoring).

TD : { TDprincipal, TDinterest }

Both components are expressed in units such as man-hours, work incurred or loss of productivity. TD can be expressed as an array where TDprincipal is independent of TDinterest. In keeping with the debt metaphor, the principal debt can be higher than the interest incurred, or vice versa.

Various approaches are available for calculating the TD principal. Curtis et al. describe the TD principal value as a function of the number of must-fix problems, the time required to fix them, and the cost of the fix. Source code analysis provides actual counts and values calculated against input variables and the code itself: structural problems, modularity violations, LOC and duplication (to name but a few) all provide input for this analysis. TD can then be expressed as a product of this source code analysis. The principal amount of TD can be calculated using the high-level formula:

TDprincipal = Nmust-fix issues x Ttime required x Ccost incurred
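As a minimal sketch, the formula can be computed directly; the issue count and rates below are illustrative placeholders, not values from the research:

```python
# Hedged sketch of the high-level formula TDprincipal = N x T x C.
# All numbers below are illustrative assumptions, not measured values.

def td_principal(must_fix_issues: int, hours_per_fix: float,
                 cost_per_hour: float) -> float:
    """Estimate the TD principal as issues x time-to-fix x cost."""
    return must_fix_issues * hours_per_fix * cost_per_hour

# e.g. 40 must-fix issues, 3 hours each, at a rate of 500 per hour:
print(td_principal(40, 3, 500))  # 60000
```

In practice each factor would come from source code analysis (issue counts) and project accounting (time and cost rates).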

2.4  Technical debt mapped to XP practices

All software systems develop pain points over time. These pain points can be characterized using the ISO 9126 [12] standard. (This standard has since been revised as ISO/IEC 25010 [13].) A generic indicator of software quality can be defined by six characteristics: functionality, reliability, usability, efficiency, maintainability and portability [12].

These points are dealt with as they arise, and they are also indicators of TD. In Figure 3, the author adapted the original diagram by Zazworka et al. [14] to reflect a mapping to the relevant mitigating XP practices.

By analysing the characteristics of each XP practice and variants of TD, a mapping can be made of the XP practices that effectively reduce the relevant form of TD.

Figure 3: Adapted Technical Debt landscape [14]

2.4.1  Coding standards – Directly influence the actual structure, composition and interaction of a software system at its lowest level. This can include guidelines and rules that the software engineer has to follow when writing code. This important practice touches on every part of the design, writing and testing of code.

2.4.2  Continuous integration – Provides feedback on system health and interaction in a continuous and predictable manner. As code is written and checked into version control, an extensive process of compiling, testing and deploying is started. Feedback is provided from all of these sub-processes; they fail fast and provide immediate information.
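The fail-fast behaviour can be sketched as a simple staged pipeline (the stage names and runner below are an illustrative assumption, not a real CI tool):

```python
# Hypothetical fail-fast pipeline: stages run in order, and the first failure
# stops the run immediately, mirroring CI's rapid-feedback loop.

def run_pipeline(stages) -> str:
    for name, stage in stages:
        if not stage():
            return f"FAILED at {name}"  # fail fast: later stages never run
    return "SUCCESS"

# Placeholder stages standing in for real compile/test/deploy steps.
result = run_pipeline([
    ("compile", lambda: True),
    ("test", lambda: True),
    ("deploy", lambda: True),
])
print(result)  # SUCCESS
```

Because a failure short-circuits the run, the team learns about a broken build minutes after check-in rather than at release time.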

2.4.3  Incremental design – Allows for the evolution and advancement of the software system in small, manageable and measurable increments. Small releases are easy to scrutinize for TD. Frequent small releases are also highly productive, and stakeholders perceive a growing, healthy software system.

2.4.4  Refactoring – Allows for the evolution and advancement of the software system on a code level and keeps the system at an optimal level of engineering excellence. If the code is frequently refactored, it becomes familiar to the software engineering team and TD is dealt with proactively.
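A tiny, hypothetical before-and-after illustrates the kind of refactoring meant here: duplicated logic is extracted into a named function without changing behaviour:

```python
# Before: the discount rule is written inline wherever totals are computed,
# so a rule change must be hunted down in several places.
def invoice_total_before(prices):
    total = 0
    for p in prices:
        if p > 100:
            total += p * 0.9  # bulk discount duplicated inline
        else:
            total += p
    return total

# After: the rule has a name and lives in one place; behaviour is unchanged.
def discounted(price: float) -> float:
    """Apply the bulk discount rule (over 100 earns 10% off)."""
    return price * 0.9 if price > 100 else price

def invoice_total(prices) -> float:
    return sum(discounted(p) for p in prices)

# The refactoring preserves behaviour:
assert invoice_total_before([50, 200]) == invoice_total([50, 200])
```

The assertion at the end is the essence of safe refactoring: the external behaviour is identical, while the internal structure has improved.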

2.4.5  Test-first programming – Software defects are kept at a minimum. The addition of new functionality is made simpler and more visible due to observable and trackable test results. As all the tests pass, the software system’s integrity is maintained and confirmed. As the system grows, the test suite grows too.
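A minimal sketch of the practice, using a hypothetical fizzbuzz function: the test cases are written first and drive the implementation, and the growing suite keeps confirming the system’s integrity:

```python
# Test-first sketch (illustrative): the checks below are written before the
# function exists and drive its implementation.
import unittest

def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")
    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
    def test_other_numbers(self):
        self.assertEqual(fizzbuzz(2), "2")

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FizzBuzzTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each new piece of functionality starts life as a red test; once it passes, the suite becomes a permanent, observable record of the system’s expected behaviour.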

3.      Findings

Research indicates that there are various approaches to measuring and indicating the extent of TD. This applies to both TDprincipal and TDinterest as described in Section 2.3. Only once a standard approach is established can tools like SonarQube [15] come to fruition and become more scientific [11]. The SQALE [16] method has been widely adopted as a standard way of measuring and expressing TD, and this adoption allows for increased feedback and evolution of the method.

Apart from the common causes of TD discussed in this paper, a disconnect between technical staff and management is a major cause of project overruns and, subsequently, TD. This is exacerbated by a lack of further learning by middle management, which also influences technology decisions; a poor understanding of technology leads to an increase in TD.

When XP is implemented, effective mitigation of TD occurs naturally. XP has had a significant impact on software engineering over the last decade; CI, refactoring and test-first programming have been instrumental in this, especially in the management of TD [17].

Each practice addresses one or more forms of TD [18]. Coding standards combined with refactoring and static code analysis directly increase code quality. Refactoring also aids the writing of unit tests; further, if done correctly, code will always be in a better state than before refactoring took place. Every new feature is built off a better base than the previous feature: this is true continued improvement. CI reduces the cost of integration, provides continuous feedback and mitigates operational debt. Incremental design allows the focus to be on the functionality needed now; small increments and modularity counter design debt. Effective use of test-first programming dramatically reduces defects and increases code quality, mitigating code and test debt [19, 20].

In previous research [21], conclusive empirical evidence was provided in the written responses of interviewed software engineers. Applying a points system to the ranked XP practices provides a clear hierarchy of effectiveness. The coding standards practice is ranked first by a substantial margin, followed by refactoring and incremental design respectively. Test-first programming and continuous integration occupy the last two positions.

Table 3: XP practices ranking

4.      Conclusion

The five XP practices are intrinsically linked in their ability to mitigate TD, and following all of them drastically reduces TD levels. The power of these XP practices [22] lies in the fact that they complement and follow on from one another. Most modern agile methodologies heavily incorporate these five practices, a testament to their importance.

In its simplest form: “Technical debt is the difference between what was promised and what was actually delivered” [23]. This difference is not only very difficult to measure; its root cause can also be difficult to find. TD will always be incurred, at varying degrees of cost. An argument must be made that a software system is born from the code itself, not from the idea of a software system. The code captures not only the core business idea or the solution to the problem, but the reasoning and cognitive ability of a group of people: an amalgamation of intellect. Code must be guarded, maintained, optimized and treated with a level of respect. The empirical evidence presented substantiates that the five XP practices aid in this.

Over time, the measurement of TD and the tools reporting on it will increase in sophistication. There is no single solution to measure, report and reduce TD, but rather a combination of measures and practices. The astonishing number of research papers and commercial literature on the subject is proof of this.

Adapting to change is crucial to any software system evolving over time. Empirical evidence shows that this evolution can have negative consequences, but the practices we use to measure and mitigate the impact are also evolving in their effectiveness.

REFERENCES

[1]  W. Cunningham. “The WyCash portfolio management system.” ACM SIGPLAN OOPS Messenger, vol. 4, pp. 29–30, 1993.

[2]  Debt Metaphor – Ward Cunningham. URL https://www.youtube.com/watch?v=pqeJFYwnkjE. Last accessed: 1 October 2016.

[3]  Software Entropy. URL https://en.wikipedia.org/wiki/Software_entropy. Last accessed: 1 October 2016.

[4]  N. Ford. Evolutionary architecture and emergent design: Investigating architecture and design. IBM developerWorks. URL http://www.ibm.com/developerworks/java/library/j-eaed1/index.html.  Last accessed: 1 October 2016.

[5]  K. Beck and C. Andres. Extreme Programming Explained: Embrace Change, 2nd Edition. Addison Wesley, 2005.

[6]  Solid principles. URL https://en.wikipedia.org/wiki/SOLID_(object-oriented_design). Last accessed: 1 October 2016.

[7]  C. Y. Baldwin and K. B. Clark. Design Rules: The power of modularity. The MIT Press, 1999.

[8]  J. Coffin. Cohesion and Coupling: Principles of Orthogonal, Object-Oriented Programming. URL http://www.jasoncoffin.com/cohesion-and-coupling-principles-of-orthogonal-object-oriented-programming/. Last accessed: 1 October 2016.

[9]  N. Zazworka, A. Vetro, C. Izurieta, S. Wong, Y. Cai, C. Seaman, and F. Shull. “Comparing Four Approaches for Technical Debt Identification.” Tech. rep., 2016. URL https://www.cs.montana.edu/courses/esof522/handouts_papers/TDLandscape.pdf.

[10]  B. W. Boehm. Software Engineering Economics. 1981.

[11]  M. Kuznetcov. Measuring Architectural Technical Debt. Master’s thesis. URL www.ru.nl/publish/pages/769526/z-mscis-s4340132-mkuznetcov-2014-08-28.pdf.

[12]  ISO 9126 standard. URL http://www.iso.org/iso/catalogue_detail.htm?csnumber=22749. Last accessed: 1 October 2016.

[13]  ISO/IEC 25010 standard. URL http://www.iso.org/iso/home/store/catalogue_ics/catalogue_detail_ics.htm?csnumber=35733. Last accessed: 1 October 2016.

[14]  N. Zazworka. Technical Debt. URL http://www.nicozazworka.com/research/technical-debt/. Last accessed: 1 October 2016.

[15]  SonarQube. URL http://www.sonarqube.org/. Last accessed: 1 October 2016.

[16]  SQALE. URL http://www.sqale.org. Last accessed: 1 October 2016.

[17]  N. Brown, Y. Cai, Y. Guo, R. Kazman, M. Kim, P. Kruchten, E. Lim, A. MacCormack, R. Nord, I. Ozkaya, R. Sangwan, C. Seaman, K. Sullivan, and N. Zazworka. “Managing Technical Debt in Software-Reliant Systems.” Tech. rep. URL https://pdfs.semanticscholar.org/f754/db80f0e465cfcad4077c5703ff1cdfd8e902.pdf.

[18]  J. Holvitie, V. Leppänen, and S. Hyrynsalmi. “Technical Debt and the Effect of Agile Software Development Practices on It – An Industry Practitioner Survey.” Tech. rep. URL http://conferences.computer.org/mtd/2014/papers/6791a035.pdf.

[19]  C. Sterling. Managing Software Debt – Building for Inevitable Change. Addison Wesley, 2010.

[20]  J. C. Sanchez, L. Williams, and E. M. Maximilien. On the Sustained Use of a Test-Driven Development Practice at IBM. URL https://pdfs.semanticscholar.org/a00c/61b77e2df21b43d5e500341d5efec286c195.pdf. Last accessed: 1 October 2016.

[21]  C. Fourie. “EXTREME PROGRAMMING PRACTICES THAT MITIGATE TECHNICAL DEBT.” Tech. rep., School of Electrical and Information Engineering, University of the Witwatersrand, 2016.

[22]  Extreme Programming . URL http://www.extremeprogramming.org. Last accessed: 1 October 2016.

[23]  Escaping the black hole of technical debt. URL https://www.atlassian.com/agile/technical-debt. Last accessed: 1 October 2016.

by Carel Fourie

Ideas

With the ubiquity of electronics, going from an idea to a working prototype has become cheaper and easier than ever before. The biggest problem with ideas is that everyone has them but most people do not have the drive and persistence needed in order to turn their ideas into a reality. To add further insult to injury it is also a challenge ensuring that the ideas one has chosen are good and not just a waste of time and money.

Source: https://static.pexels.com/photos/192637/pexels-photo-192637.jpeg

Idea selection

In terms of selecting an idea, it is worth taking a step back and examining examples of good ideas and what made them so successful. Throughout history, man has not changed much from an evolutionary perspective; as a result, man’s desires and needs have not changed much either. The key thing that has changed is the available technology that has enabled us to fulfill these needs and desires in different ways. Fiat money was first developed not because it was a good idea in itself, but because it became impractical to carry heavy goods and gold around to barter with. A more recent example is the development of Google and Wikipedia: prior to the internet, people would have used encyclopedias and libraries to research anything they needed, but the internet allowed Google and Wikipedia to spread this knowledge more efficiently and broadly. With this observation in mind, namely that an idea can be successful if people find it useful, we get a tool to help sift through our ideas: simply check whether an idea will be useful to enough people, and if so, see how technology can help us pull it off to great effect.

Execution

Once an idea has been selected, the execution stage can begin. Many methodologies have arisen to make the process of building ideas more scientific. Lean startup methodologies are a popular approach in the startup space, while agile provides similar concepts for software development. Whatever the approach, they generally encourage people to come up with a hypothesis and decide on the smallest possible chunk of that hypothesis needed to make what is known as the MVP, or minimum viable product. All bloat is removed in favour of the smallest possible grain of the idea, so that we can get it into the hands of customers as fast as possible. Small development cycles are advocated so that we can get feedback on the idea quickly and, based on that feedback, validate our hypothesis, tweak it, or completely change direction by pivoting.

One story that illustrates the power of small iterations comes from the book “Art & Fear: Observations on the Perils (and Rewards) of Artmaking” by David Bayles and Ted Orland:

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality”, however, needed to produce only one pot – albeit a perfect one – to get an “A”. Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work – and learning from their mistakes – the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.

What we can infer from this is that the faster we can test more ideas, the faster we can start perfecting our process and in so doing eventually hit upon the best ideas.

Constraints

When building something, it is very valuable to draw a line in the sand in terms of both time and money. If we have no deadline we may never finish, so setting a firm deadline helps us weed out unnecessary features to end up with our MVP, and pushes us to make our development cycles as short as possible. Y Combinator (a company that provides early-stage funding and assistance to startups), for example, gives the companies it funds just enough money to act as seed funding and 10 weeks to build a working prototype, after which they present it to potential investors and acquirers. With unlimited funds and time, we are more likely to keep adding unnecessary features and deviate from the MVP we decided on upfront.

On a much smaller scale and from a personal perspective I decided I wanted to start building up an online presence with my own personal blog. I wasted time getting lost in the details and the technologies available without writing a single article. In the end I gave myself a deadline of two weeks from that point and decided my main aim was about the articles I wanted to start writing and not so much about the technology behind it. So I ended up using the cloud computing provider DigitalOcean and used one of their pre-built vanilla Ghost blogging platform deployments to get up and running ASAP. In the end putting this time constraint in place forced me to get on the right track.

Coming up with good ideas is tougher than it may seem. Many people have ideas, but not many can go from idea to finished product. By looking at existing ideas, one can get a feel for what makes a good idea: generally, it is something people really need because they find it useful. A number of methodologies have come to light that guide us in validating an idea as fast as possible. Giving ourselves constraints helps keep us honest and working towards a reasonable deadline. In the end, if we can iterate through our ideas and validate them as fast as possible, we are more likely to come upon a successful one. Thomas Edison summed it up best in his response to a reporter’s jeering comment about the number of times he had failed: “I have not failed. I’ve just found 10,000 ways that won’t work.”

by  Yair Mark

The Greedy King

There once ruled a king named C. He spoke a simple language. His subjects stood in awe of his greatness, until his child, whom they called C++, took his place on the throne. His seat was warm and none would dare challenge his reign. But in a distant land there were whispers of an abomination. A thing they called Java (for lack of a better name). Java was huge! He was bloated, verbose and ran on a Virtual Machine, making him virtually indestructible! The kingdom of C++ was a messy one, while Java was clean because it had its own personal garbage collector. It was sloppy, but it got the job done. Java was a greedy king; he knew he had the numbers and he craved the power that C++ had. So he declared war. Needless to say, Java defeated C++, and most of C++’s subjects now followed Java and played by his rules.

https://img.memesuper.com/bd3e69d22814c715ceb98d99b0d38943_-java-sparta-abc-memes-memes-java_600-597.jpeg

There existed a tiny island called Lambda, where only the most intelligent lived. They were a peaceful little nation of men that spoke with a Lisp. Nobody ever bothered them. Nobody ever saw them as a threat. They developed languages that only they could comprehend, much different from what their fellow man at the kingdom of Java spoke. These mad mathematicians were developing a virtual machine that would end all virtual machines. They were crafting a language that sat atop the VM that would allow vast amounts of concurrency and speed within distributed systems. Could these mere mortals possibly have possessed the ability to see into the future?

As time went by, the great kingdom of Java grew and grew. The Java Virtual Machine (JVM) was improved and things were dandy. Until they weren’t.

The world had grown to love Java and all he stood for. They had become blinded to the perils of the modern world. The world was connected, with billions of people sending data back and forth. Java could not cope. His subjects developed gruesome methods to try to deal with the concurrency issues, but Java just couldn’t handle the load. Woe to all those that did not seek shelter from the coming tempest.

https://img.memesuper.com/b960114da3aa42880b2abfb6b0d9f2bf_learn-java-they-said-itll-be-memes-java_625-468.jpeg

On one fateful day, a trade boat returned from Lambda with great news of a paradigm they called Functional Programming. The JVM was flexible, and so the “bright minds” of the land built functional languages that would run on it. “My king, forgive us. We could not match the power of the languages at Lambda.” Alas, the king found clojure in his new language, Scala.

https://memegenerator.net/You-CanT-If-You-DonT

Word of functional programming travelled across the land and new languages sprouted. These languages operated in much the same way. They used a technique called message passing to instruct on what to do next. They used models to keep state immutable, meaning that something said about a particular thing could never be changed. You would have to create a new thing with new traits. No take backs. If you said that Chihuahuas were small, then they would forever be small. You would have to create a new breed of dog entirely, with a new size. This is what they call state. The state of a Chihuahua is small. If that fact was mutable and we allowed everyone to change it then we would never know what the end result would be. A giant Chihuahua, maybe?

https://impossiblehq.com/wp-content/uploads/2013/04/Final-Form.jpg

This is what made functional languages so predictable… immutable state.

Back at Lambda, there was an ancient city that stood at the mouth of an active volcano they named Ericson. It was here that the Erlang Virtual Machine (BEAM) was born. It was perfect in every way. Rigid, but never arrogant. It could handle concurrency in a manner never before seen. The people of Mount Ericson spoke Erlang, a tongue which possessed vast amounts of inner beauty beneath its ugly veneer. It never quite took off, until one day a vagabond strayed into the city, seeking refuge. He was a Brazilian programmer that spoke Ruby, a language much like English, created by a Japanese man.

http://i0.kym-cdn.com/entries/icons/facebook/000/018/489/nick-young-confused-face-300x256_nqlyaa.jpg

It was an abomination in disguise, but had its merits. The vagabond, José Valim, was talented and quickly picked up Erlang. He began to change the language and fuse it with his own. It was from this fusion that the world was blessed with Elixir.

Java continued his dictatorship. His sheeple were like mindless zombies, writing line after line of fault-ridden code. Systems crashed, companies closed down, coders became depressed. They grew lazier by the day.

José travelled all over the world, preaching of his new dialect. He spoke of salvation, a place where all programmers could write better code, in far fewer lines. He delivered great sermons of a Fault-Tolerant way; supervisors which would watch over your delicate code and make sure that it behaved as expected. He promised no more multithreading, and offered a new approach called parallel processing. So many great things, falling, alas, on so many deaf ears.

And so, the world remained in a deadlock, ruled by a king supported by the wealthiest in the land. His followers too afraid to change… too lazy to adapt.  No one man should have all that power.

However, one by one, the eyes of the blind opened and functional languages became more popular.

Java had a cousin, dubbed Javascript, Guardian of the Front-End. He was loved, quick to react and drove a V8. Java and Javascript once battled for control of the web. Java, being slow and bloated, lost. Javascript saw the coming change and decided to add functional programming to his skill-set. He showed the coders of the world that functional can be better and faster. Java was being outmatched in almost every area in which he once excelled, even losing his grip on his ability to program Androids.

Every great empire falls, and Java knew he would soon be overthrown. It was just a matter of time.

The End.

by Sherwin Hulley

The Power of the Unconversation

On the 9th of March 2017 twelve enthusiastic Foundery members attended DevConf 2017, South Africa’s biggest community driven software development conference: an event that promised learning, inspiration and networking.

Courtesy of DevConf 2017 (devconf.co.za)


With a multi-tracked event such as this one there is usually something for everyone, and yet if you speak to serial conference attendees (guilty as charged), the talks aren’t the greatest reason to attend.

People like me go to conferences in part for the scheduled content, but mostly for the unscheduled conversations in the passage en route to a talk or around a cocktail table during a break. The “unconversations”, I’m calling them. It’s the conference equivalent of another well-known creative outlet: “water cooler conversations”.

I’ll admit that I’m a bit of a conference butterfly – actively seeking out these “unconversations” so that I can join them. I especially take note as crowds disappear into conference rooms. I’m drawn to the groups of people who stay behind wherever they might have gathered. That’s where I’m almost guaranteed to participate in really interesting discussions and learn something new. When I attend conferences, it’s this organic and informal style of collaborative enquiry I look forward to the most.

Courtesy of DevConf 2017 (devconf.co.za)

Ironically it was one of the DevConf talks that helped me understand why these “unconversations” tend to work so well as creative spaces. In his talk on Mob Programming, Mark Pearl mentioned a study conducted by the American Psychological Association which established that groups of 3-5 people perform better on complex problem solving than the smartest person in the group could perform on their own. See “references” for more information.

Loosely translated, a group of people has a better shot of solving a complex problem together than if they tried to solve it independently.

As a Mob Programming enthusiast myself, this makes complete sense to me. What’s interesting is that this research is not new, yet many organisations still discourage “expensive” group-work and continue to reward individual performance, and I can see why. For people with similar upbringings and educational backgrounds to mine, this is the comfort zone. We default to working alone and feel a sense of accomplishment when we achieve success individually. As children we were told to solve problems and find answers on our own. Receiving help was a sign of weakness, and copying was forbidden.

In contrast, the disruptive organisations of the last few decades encourage the complete opposite. These organisations recognise the value of problem-solving with groups of people who have varying, and even conflicting, perspectives. There’s no time for old-school mindsets that favour individual efforts over collaboration. We need to cheat where it’s appropriate by knowing who can help us and what existing ideas we can leverage.

I don’t mean to trivialise it. There’s a bit more involved than just creating opportunities for people to solve problems in groups. According to the book “Collective Genius”, innovative companies such as Google have developed three important organisational capabilities: creative abrasion (idea generation by encouraging conflict and high quality feedback), creative agility (hypothesizing, experimenting, learning and adapting) and creative resolution (deciding on a solution after taking new knowledge into account) all supported by a unique style of leadership. The case studies are incredibly motivating.

Since joining the Foundery I’m discovering that we are practicing these things every day, and the amazing ideas and products born from our “collective genius” serve as confirmation that we’re on the right track. Is it always easy? No, absolutely not. It requires a great deal of mindfulness.

When I’m reflective I notice that the greatest ideas and most creative solutions I’ve brought to life were conceived with input from others. Many of the dots I connected for the first time happened during completely unlikely meetings of minds, and some through passionate differences of opinion. In an environment that calls for constant collaboration, it’s wonderfully refreshing to find that the “unconversations” I enjoy so much are happening all around me, every day.

And so long as I’m participating, I am always reminded that together we are more capable of solving really complex problems than the smartest one among us, and I’m becoming more and more OK with that.

References:

By Candice Mesk

 

The changing world around programmers

In today’s ever-changing world, we find that businesses have become more concerned about what you can do rather than what qualification you have.

This shift is becoming more apparent as companies face an acute shortage of capable coders who can deliver to their expectations. The gap in the employment market keeps widening, as universities turn out far fewer BSc Computer Science graduates than the industry actually demands.

This situation has led the industry to change the way it looks at qualifications and to focus more on a person’s ability to code and learn. If you are a self-taught coder with an understanding of industry-relevant technology, you are in a much better position than someone who still has to go to university and learn to code there for the first time. A few companies are willing to take the risk of hiring someone without formal coding qualifications, and have reaped the rewards of those risks. The coders they hire generally seem to be more aware of what new technology is available, and are more willing to learn something new in order to help them grow further.

We are starting to see a paradigm shift in the industry and in the way people think. The Stack Overflow developer survey reports that the proportion of developers with some self-taught experience rose from 41.8% in 2015 to 69.1% in 2016. In other words, a large share of developers are self-taught, and more people are teaching themselves to code every year. People who start coding from a young age often show real passion for it, and combined with their curiosity for learning something new, their love for the craft speaks volumes. The ability to create anything they can think of on a PC, to make the machine behave exactly as they want and to see a visual representation of it, is remarkable.

For those interested in teaching themselves how to code, there are many websites to look at. Here is a list of 10 places you can learn coding from; these are the top three that I learnt the most from:

Those websites each have their own way of teaching code, and if you combine this with some YouTube videos from CS50 and MIT OpenCourseWare, you will be all set to learn at your own pace. HackerRank is a good way to test everything you have learnt, and you can see how you rank against the rest of the world.

WeThinkCode_ is an institution for learning to code, open to anyone aged 17 to 35. Their thinking is that you do not need a formal qualification to be a world-class coder. More institutes like this are opening across the world. The wide age range illustrates that you are never too old to learn how to code. There are also more and more coding education opportunities for young people, and it is easier to learn to code from a young age, when your mind is in its prime for learning new things and adjusting to constant change.

In a programmer’s world you are constantly learning new things, and this is what makes our jobs exciting.

The world is ever-evolving and we all need to keep adjusting our mindsets on how we look at things, otherwise we will be left behind while everyone moves forward.

By Gabriel Groener

The Modern Programmer

IT professionals often don’t get an honest portrayal in the entertainment industry and, for better or worse, the mass perception of Computer Science has been influenced by what people see on their TV screens. Either we sit in a dingy dark room, littered with empty energy drink cans, staring at a terminal with green font flashing and passing by at light speed – with sound effects, or we are cool rich guys creating programs that become self-aware.

There really isn’t a middle ground, and these perceptions either drive people to develop an insatiable curiosity in the field or leave them fearful, believing that they aren’t mentally fit to join the club.

http://i.imgur.com/heb9csO.jpg

The demographic of the modern programmer isn’t what it was back in the 70’s. Most IT professionals were – well… professionals. They were mathematicians, engineers, scientists, accountants and so on, often in their 30’s or 40’s. The programming industry was almost 50% women. What on earth happened?

Well, I have a theory. Computer Science (CS) wasn’t widely offered as a university course at that time, so youngsters really had no way of entering the field. Not to mention that what they called a computer back then isn’t what we have today. Computers were big, expensive and far scarcer. There were no operating systems. Programmers wrote code by hand, which was then converted into punch cards that could be fed into the computer, and you had better pray that what you wrote was correct – which, if you code, you know it often isn’t – because otherwise you would have to start that lengthy process from scratch. Blessed are those that came before us, for they were a resilient few. By the time CS was offered as a course it was the 80’s, and young adults could learn how to code.

http://i.imgur.com/27vs3iD.jpg

The 80’s was definitely one of the most defining times in modern history. We saw technology really being embraced in the media. Back to the Future, Ghostbusters, Star Wars, Terminator and many more franchises showed us a world of technology that seemed almost impossible. In lots of ways we are still catching up to the imaginations of the filmmakers and science fiction writers. But I find this time very interesting because it gave birth to the geek culture which has lasted to this day. This culture was very young and male-dominated. It was a kind of cult to those who were part of it. This must have driven the women away. Women in general still don’t get the culture. Heck, even I don’t get it to the degree of hardcore followers.

Now think about how we perceive these “geeks” in society. Beady-eyed, brace-faced, drooling, good-grade-getting teens with bad acne (is there good acne?) and thick glasses, always getting bullied by the “jocks”. Truth is, in a quest to fit in, teens only hang out with the group that they relate to and that accepts them. Learning became the uncool thing and disco was in. The media neatly crafted and packaged nerd culture. Being a cool kid meant you didn’t even greet the nerd – unless shoving someone into a wall counted as a greeting. And so that was that. Programmers were part of a culture that embraced creativity, logic and intelligence and frowned upon anything less, because in order to be a programmer you needed to love learning and solving problems. Being a cool kid meant you had to love partying, gossip and creating problems.

http://www.philiployd.com/wp-content/uploads/2016/04/geek.jpg

Things have changed somewhat. Programmers today come in different shapes and sizes. Still not many hourglass shapes, but we’re getting there. The next generation of teens will definitely be more in tune with technology and the true culture of the geek or the “hacker”. Those that fail to see the power of new technologies will be left behind. Computers are so much more accessible, and more and more schools are starting to teach coding. With innovative colleges like WeThinkCode_ and 42, the future of what we perceive as an IT professional will be completely different to what we have today.


It’s now up to us to make sure that our kids become programmers rather than the programmed. It’s in the small things that we spot the young coder. The little kid that breaks his/her toys to find out how they work. Kids are naturally curious and it’s up to us to nurture that curiosity and not reprimand or punish them for it. We interact with technology every day and we would only be empowering them by encouraging them to learn how to control that technology as creators in the same way that we might teach them how to play a musical instrument. I envision a world where the modern programmer is anyone, in a society that frowns on those that shun learning. Let’s make it happen.

by Sherwin Hulley