Ideas


Source: https://static.pexels.com/photos/192637/pexels-photo-192637.jpeg

With the ubiquity of electronics, going from an idea to a working prototype has become cheaper and easier than ever before. The biggest problem with ideas is that everyone has them, but most people do not have the drive and persistence needed to turn their ideas into reality. To add insult to injury, it is also a challenge to ensure that the ideas one chooses are good and not just a waste of time and money.

Idea selection

In terms of selecting an idea, it is worth taking a step back and examining examples of good ideas and what made them so successful. Throughout history, man has not changed much from an evolutionary perspective. As a result, man's desires and needs have not changed much either. The key thing that has changed is the technology available, which has enabled us to implement concepts that fulfil these needs and desires in different ways. Fiat money was first developed not because it was a good idea but because it became impractical to carry heavy goods and gold around to barter with. A more recent example is the development of Google and Wikipedia. Prior to the internet, people would have used encyclopedias and libraries to research anything they needed, but the advent of the internet allowed Google and Wikipedia to spread this knowledge more efficiently and broadly. With this observation in mind, namely that an idea can be successful if people find it useful, we get a tool we can use to sift through our ideas: check whether an idea will be useful to enough people, and if so, see how technology can help us pull it off to great effect.

Execution

Once an idea has been selected, the execution stage can begin. Many methodologies have arisen to make the process of building ideas more scientific. Lean startup is one of the popular approaches in the startup space, while agile provides similar concepts for software development. Whatever the approach, they generally encourage people to come up with a hypothesis and decide on the smallest possible chunk of it needed to build what is known as the MVP, or minimum viable product. All bloat is removed in favour of the smallest possible grain of the idea that we can build, so that we can get it into the hands of customers as fast as possible. Small development cycles are advocated so that we can get feedback on the idea quickly and, based on that feedback, validate our hypothesis and tweak it a bit more, or completely change direction by pivoting.

One story that illustrates the power of small iterations comes from the book "Art & Fear: Observations on the Perils (and Rewards) of Artmaking" by David Bayles and Ted Orland:

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the "quantity" group: fifty pounds of pots rated an "A", forty pounds a "B", and so on. Those being graded on "quality", however, needed to produce only one pot – albeit a perfect one – to get an "A". Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the "quantity" group was busily churning out piles of work – and learning from their mistakes – the "quality" group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.

What we can infer from this is that the faster we can test more ideas, the faster we can start perfecting our process and in so doing eventually hit upon the best ideas.

Constraints

When building something, it is very valuable to draw a line in the sand in terms of both time and money. If we have no deadline we may never finish, so setting a firm deadline helps us weed out unnecessary features to end up with our MVP and pushes us to make our development cycles as short as possible. Y Combinator (a company that provides early-stage funding and assistance to startups), for example, gives the companies it funds just enough money to act as seed funding and 10 weeks to build a working prototype, after which they present it to potential investors and acquirers. With unlimited funds and time, we are more likely to keep adding unnecessary features and deviate from the MVP we decided on upfront.

On a much smaller scale and from a personal perspective, I decided I wanted to start building an online presence with my own personal blog. I wasted time getting lost in the details and the technologies available without writing a single article. So I gave myself a deadline of two weeks from that point and decided my main aim was the articles I wanted to start writing, not so much the technology behind them. I ended up using the cloud computing provider DigitalOcean and one of their pre-built vanilla Ghost blogging platform deployments to get up and running as soon as possible. In the end, putting this time constraint in place forced me to get on the right track.

Coming up with good ideas is tougher than it may seem. Many people have ideas, but not all that many can go from idea to finished product. By looking at existing ideas one can get a feel for what makes a good idea: generally it is something that people really need because they find it useful. A number of methodologies have emerged that guide us in validating an idea as fast as possible, and giving ourselves constraints helps keep us honest and working towards a reasonable deadline. In the end, if we can iterate through our ideas and validate them as fast as possible, we are more likely to come upon a successful one. Thomas Edison summed it up best in his response to a reporter's jeering comment about the number of times he had failed: "I have not failed. I've just found 10,000 ways that won't work."

 

by Yair Mark

A Blockchain Problem


https://i.imgflip.com/8w9ro.jpg

 

Blockchain has been hailed as the next 'big thing', a term thrown around in social gatherings alongside the likes of 'big data', 'cloud computing' and the 'Internet of Things'. Yet a clear understanding of the true value of blockchain is sorely lacking in many minds, leading to wasted resources at some of the world's top banks and tech firms.

Dubbed another 'solution without a problem' by some of its critics, blockchain generates hype that appears to oscillate between inflated expectations and the trough of disillusionment, never quite hitting the slope of enlightenment, and thus never approaching anything quite like a product.

So what good is it really?

If you’re not familiar with the Byzantine Generals problem, here’s a quick overview:

Nine Byzantine Generals are entrenched around a city. They are divided over their current course of action: do they attack, or do they retreat? Failing to reach consensus, they decide to cast votes, each general sending a messenger to relay his choice to the other generals, where the majority decision will be the action to take.

Four of the generals vote to attack and four vote to retreat. The group is split in two: those in favour of retreating, who begin to strike down their tents, and those in favour of attacking, who gather on the frontlines. What of the last general? Well, he’s been bribed by the city’s leaders*. Rather than vote one way or the other, he dispatches two messengers; the first states that he will attack, the second states that he will retreat.

Four of the generals thus lead their troops into battle and suffer a stunning defeat. The other four generals retreat, dishonoured, with a significantly weakened army.

This story is an allegory of a concept we have come to know as double-spending. In traditional markets, every merchant keeps their own ledger of all transactions. This is also true in the world of digital payments. Some clever customers have been known to make multiple transactions to different vendors, essentially issuing IOUs without the actual means to make good on every transaction. Come time for settlement, vendors have delivered their items, banks are short and the clever customer has high-tailed it through a proxy**.

This is the main problem blockchain is intended to solve: a distributed ledger network in which all vendors and customers share the same record of transactions and balances. Incentivised 'miners' gather new transactions from a shared pool into blocks, and only one block of transactions is accepted at a time, based on consensus among the other parties, effectively solving the double-spend issue.

Too bad the Byzantines didn’t have blockchain.
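To make the mechanics a little more concrete, here is a minimal sketch in Python of a shared ledger in which every block commits to the previous one by hash and a transaction input can only ever be spent once. This is an illustration only, not the Bitcoin protocol; the names and the consensus shortcut are assumptions made for brevity.

    import hashlib
    import json
    import time


    def block_hash(block):
        """Hash a block's contents, including the previous block's hash."""
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


    class Ledger:
        """A toy shared ledger: every participant holds the same chain of blocks."""

        def __init__(self):
            genesis = {"index": 0, "timestamp": 0, "transactions": [], "prev_hash": "0" * 64}
            self.chain = [genesis]
            self.spent = set()  # transaction inputs that have already been used

        def add_block(self, transactions):
            """Accept a block only if none of its inputs have been spent before."""
            for tx in transactions:
                if tx["input"] in self.spent:
                    raise ValueError("double spend detected: " + tx["input"])
            for tx in transactions:
                self.spent.add(tx["input"])
            block = {
                "index": len(self.chain),
                "timestamp": time.time(),
                "transactions": transactions,
                "prev_hash": block_hash(self.chain[-1]),
            }
            self.chain.append(block)
            return block


    ledger = Ledger()
    ledger.add_block([{"input": "coin-42", "to": "vendor-A"}])
    # The same coin offered to a second vendor is rejected by the shared rules.
    try:
        ledger.add_block([{"input": "coin-42", "to": "vendor-B"}])
    except ValueError as err:
        print(err)

In a real network, the "only one block accepted at a time" part is what miners and their consensus rules provide; the sketch simply shows why a single shared record makes the ninth general's trick visible to everyone.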

Perhaps we need to go back to basics and focus on utilizing blockchain for its original intended purpose. All the pieces are set for us to begin digitizing financial instruments, ushering in a new era of trusted peer-to-peer transacting. Only one question remains: what happened to the early adopters?

by Stuart Allen

The Greedy King

There once ruled a king named C. He spoke a simple language. His subjects stood in awe of his greatness, until his child, whom they called C++, took his place on the throne. His seat was warm and none would dare challenge his reign. But in a distant land there were whispers of an abomination. A thing they called Java (for lack of a better name). Java was huge! He was bloated, verbose and ran on a Virtual Machine, making him virtually indestructible! The kingdom of C++ was a messy one, while Java was clean because it had its own personal garbage collector. It was sloppy, but it got the job done. Java was a greedy king; he knew he had the numbers and he craved the power that C++ had. So he declared war. Needless to say, Java defeated C++ and most of C++'s subjects now followed Java and played by his rules.


https://img.memesuper.com/bd3e69d22814c715ceb98d99b0d38943_-java-sparta-abc-memes-memes-java_600-597.jpeg

There existed a tiny island called Lambda, where only the most intelligent lived. They were a peaceful little nation of men that spoke with a Lisp. Nobody ever bothered them. Nobody ever saw them as a threat. They developed languages that only they could comprehend, much different from what their fellow man at the kingdom of Java spoke. These mad mathematicians were developing a virtual machine that would end all virtual machines. They were crafting a language that sat atop the VM that would allow vast amounts of concurrency and speed within distributed systems. Could these mere mortals possibly have possessed the ability to see into the future?

As time went by, the great kingdom of Java grew and grew. The Java Virtual Machine (JVM) was improved and things were dandy. Until they weren’t.

The world had grown to love Java and all it stood for. They had become blinded to the perils of the modern world. The world was connected, with billions of people sending data back and forth. Java could not cope. His subjects developed gruesome methods to try to deal with the concurrency issues, but Java just couldn't handle the load. Woe to all those that did not seek shelter from the coming tempest.

https://img.memesuper.com/b960114da3aa42880b2abfb6b0d9f2bf_learn-java-they-said-itll-be-memes-java_625-468.jpeg

On one fateful day, a trade boat had just returned from Lambda with great news of a paradigm they'd called Functional Programming. The JVM was flexible, and so the "bright minds" of the land built functional languages that would run on it. "My king, forgive us. We could not match the power of the languages at Lambda." Alas, the king found clojure in his new language, Scala.

https://memegenerator.net/You-CanT-If-You-DonT

Word of functional programming travelled across the land and new languages sprouted. These languages operated in much the same way. They used a technique called message passing to tell one another what to do next. They kept state immutable, meaning that something said about a particular thing could never be changed. You would have to create a new thing with new traits. No take-backs. If you said that Chihuahuas were small, then they would forever be small. You would have to create a new breed of dog entirely, with a new size. This is what they call state. The state of a Chihuahua is small. If that fact were mutable and we allowed everyone to change it, then we would never know what the end result would be. A giant Chihuahua, maybe?

https://impossiblehq.com/wp-content/uploads/2013/04/Final-Form.jpg

This is what made functional languages so predictable… immutable state.
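A small aside for the curious: the same point can be sketched in a few lines of Python (an assumed, illustrative example rather than anything from the story), where an immutable record cannot be changed in place and a "new breed" has to be created instead.

    from dataclasses import dataclass, replace


    @dataclass(frozen=True)  # frozen=True makes instances immutable
    class Dog:
        breed: str
        size: str


    chihuahua = Dog(breed="Chihuahua", size="small")

    try:
        chihuahua.size = "giant"  # mutation is refused outright
    except Exception as err:
        print(type(err).__name__, err)

    # The functional route: derive a new value instead of changing the old one.
    giant = replace(chihuahua, breed="Giant Chihuahua", size="giant")

    print(chihuahua)  # unchanged: Dog(breed='Chihuahua', size='small')
    print(giant)      # Dog(breed='Giant Chihuahua', size='giant')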

Back at Lambda, there was an ancient city that stood at the mouth of an active volcano they named Ericsson. It was here that the Erlang Virtual Machine (BEAM) was born. It was perfect in every way. Rigid, but never arrogant. It could handle concurrency in a manner never before seen. The people of Mount Ericsson spoke Erlang, a tongue which possessed vast amounts of inner beauty beneath its ugly veneer. It never quite took off, until one day a vagabond strayed into the city, seeking refuge. He was a Brazilian programmer who spoke Ruby, a language much like English, created by a Japanese man.

http://i0.kym-cdn.com/entries/icons/facebook/000/018/489/nick-young-confused-face-300x256_nqlyaa.jpg

It was an abomination in disguise, but had its merits. The vagabond, José Valim, was talented and quickly picked up Erlang. He began to change the language and fuse it with his own. It was from this fusion that the world was blessed with Elixir.

Java continued his dictatorship. His sheeple were like mindless zombies, writing line after line of fault-ridden code. Systems crashed, companies closed down, coders became depressed. They grew lazier by the day.

José travelled all over the world, preaching of his new dialect. He spoke of salvation, a place where all programmers could write better code, in far fewer lines. He delivered great sermons of a Fault-Tolerant way; supervisors which would watch over your delicate code and make sure that it behaved as expected. He promised an end to hand-rolled multithreading, offering instead lightweight processes running in parallel. So many great things, falling, alas, on so many deaf ears.

And so, the world remained in a deadlock, ruled by a king supported by the wealthiest in the land. His followers too afraid to change… too lazy to adapt.  No one man should have all that power.

However, one by one, the eyes of the blind opened and functional languages became more popular.

Java had a cousin, dubbed Javascript, Guardian of the Front-End. He was loved, quick to react and drove a V8. Java and Javascript once battled over control of the web. Java, being slow and bloated, lost. Javascript saw the coming change and decided to add functional programming to his skill-set. He showed the coders of the world that functional can be better and faster. Java was being outmatched in almost every area in which he once excelled, even losing his grip on his ability to program Androids.

Every great empire falls, and Java knew he would soon be overthrown. It was just a matter of time.

The End.

by Sherwin Hulley

Do corporates need garages?


Would you like a garage with that?

Innovation is easy, right? You throw a few super smart, socially awkward people into a garage and wait until they emerge with some new technology that will change the world. And, of course, that'll take their earthly belongings from a stash of Led Zeppelin vinyls, a collection of well-worn t-shirts, and no doubt one or two student loans (for degrees they never actually finished) to billions of dollars. This worked for Apple, Amazon, Google, HP and Microsoft, so surely it'll work for everyone, right?

Proximity to Business

But what if you're not a new kid on the block but rather one of the incumbents of the industry? How feasible is it to confine a portion of your company to a dingy garage and keep them running on a diet of stale pizza and a steady stream of lofty ideals? There is a school of thought that advocates a very similar, albeit more grown-up, approach, whereby a portion of the company is carved out, or formed, and given the autonomy to experiment, invent and innovate to its heart's content, unencumbered by the drudgery of meetings about meetings and without any expectation of immediate results, or potentially any results at all. The hope is that, in time, the gamble will pay off, slingshotting the company to the forefront of a bold new wave within the industry.

At the other end of the potential scale, and it should be viewed as a scale (see below), is an internal entity that is clearly part of the organization and targets short time-to-value, incremental, mildly disruptive types of innovation. This is sometimes appropriate, especially for innovation that focuses on links within an existing value chain. To use a simplistic example from the automotive industry, it is exceptionally hard to invent a new type of indicator stalk without a steering wheel, or steering column, to attach any prototypes to, or any actual indicator lights and electrical system to test whether it even works. And as your value chain gets more complex, it gets exponentially more difficult.

So perhaps the most critical element in choosing an approach is what you're wanting to innovate. Too often people are given the broad directive to innovate, without any specific focus and with no appreciation of how independent the portion they need to innovate really is. Big corporates got big because of a certain set of competencies, so often, to avoid throwing the baby out with the bathwater, they opt to innovate portions of an existing value chain, which then requires closer collaboration (the left edge of the scale above). One caveat, though, is that you may need to rely on parts of the value chain, and by implication the people running those parts, to test your innovation. That innovation may very well be trying to disrupt another portion for which they are also accountable, so they may actually prove to be obstacles to innovation. It is the corporate equivalent of attempting to get turkeys to vote for Christmas.

Reputation of Innovation Arm

The reputation of your innovation arm also dictates the most appropriate innovation portfolio. If your innovation arm is yet to win over the skeptics in the mothership company, then you may need some quick, incremental wins before you've earned the freedom to go after the long-shots. Obviously there are ways to circumvent this, such as ensuring that innovation teams report directly to the CEO and using the resulting hierarchical power to build their reputation and its associated freedom to innovate. However, innovation teams' reporting lines would need a blog of their own to fully explore.

Harvard Business Review published a seminal article that divided innovation into Core, Adjacent and Transformational (see right, with some additions to the original HBR diagram). They found that different industries, and companies at different levels of maturity, would find a different mix between these three types appropriate. However, if an innovation arm still needs to build its reputation, it may be well advised to weight core innovation more heavily and then, as its reputation for delivering value grows, move towards a higher proportion of adjacent and transformational innovation types.

Take-outs:

  • Garage style innovation may not be appropriate for corporates.
  • Clearly define what you're wanting to innovate (part of a value chain or a long shot), and choose the appropriate proximity based on your intentions. Take note of corporate culture here too: turkeys won't vote for Christmas.
  • Consider your innovation arm’s internal reputation in selecting your innovation portfolio.

by Brad Carter

Three Spheres: Science, Design and Engineering


http://www.symmetrymagazine.org/article/universe-steps-on-the-gas

In the world of finance, the Foundery stands out as a pioneering challenger to the traditional financial institution – think suits, three-letter acronyms and legacy software housed in massive, skyline-dominating buildings. Although the Foundery isn’t alone in this endeavour, the digital financial organisation is still in its earliest days and there are many unanswered questions and unsolved challenges that lie ahead. This is the nature of the challenge that the Foundery has accepted: there will be no obvious answers or solutions.

The key to success, however, is to recognise that with uncertainty comes opportunity – the opportunity to break new technological ground and seek new digital pathways that will one day reshape the world of finance.

This blogpost, however, isn’t about those challenges. Rather it is about the pioneering spirit, embodied by three overlapping spheres of innovation: science, design and engineering.

Science

We understand science as both the body of knowledge and the process by which we try to understand the world. Science is humanity’s attempt to organise the entire universe into testable theories from which we can make predictions about the world.

Here the universe is taken to include the natural world (such as physics and biology), the social world (such as economics and linguistics) and the abstract world (such as mathematics and computer science).

If the goal of science is to formulate testable theories from which we can make predictions, how does it relate to the Foundery’s challenge of transforming the world of banking?

Science is the sphere that embodies the process of discovery. It is curiosity coupled with the discipline to establish truths and meaning in the world in which we live – including the world of digital disruption which the Foundery inhabits.

The pioneering spirit requires not only the curiosity to break new ground, but also a special kind of scientific curiosity to turn this new ground into groundbreaking discoveries.

Design

Design is the conceptual configuration of an idea, process or object. It is understood as the formulation of both the aesthetic and functional specifications of the object, idea or process.

To put it more simply in the words of the late Steve Jobs, arguably one of the most significant pioneers of the 21st century:

“Design is not just what it looks and feels like. Design is how it works.”

Whereas science is concerned with trying to understand the world that humanity occupies, design is concerned with the things – objects, ideas and processes – which humanity adds to the world, and how they look and how they work.

At the Foundery, the pioneering spirit is more than just breaking new ground: it is the creation of accessible pathways, including new solutions and disruptive technologies. Design is the process of creating new solutions – not just planning and configuring what these solutions are, but experimenting with how they look and work.

Thus design is the sphere which embodies experimentation. It is the courage to try something new, unencumbered by the fear of failure. It is the willpower to try over and over again until something great can be achieved.

Engineering

Engineering is the application of science to solve problems in the real world. At one level engineering is the intersection of science and design – combining scientific knowledge with principles from design – but taken as a whole engineering is more than that: it encompasses the design, control and scaling of constructive and systematic solutions to real-world problems.

In the past, engineering was typically associated with physical systems such as chemical processes and mechanical engines. In today's technological age, we also associate engineering with abstract information systems and computer programs.

Now financial institutions can be viewed as massive, highly complex and highly specialised information systems. So from this perspective, one part of the Foundery’s task is to engineer the processes, interfaces and information networks of the bank of the future.

Engineering is the sphere which embodies problem solving. It is one thing to break new ground and make new discoveries and experiment with new solutions, but something else entirely to translate the pioneering spirit into technologies and systems with the potential to change the world.

Bringing the Spheres Together

On their own, science, design and engineering represent different aspects of the creation process: science is the process of discovery, design is the process of experimentation and refinement and engineering is the process of problem solving. But this view alone suggests that there is a linear order to the creation process: that each process must take place in phases.

This isn’t my view and certainly isn’t the aim of this blogpost. Rather, my interpretation of science, design and engineering is that they are abstract, multi-dimensional spheres which embody the creative process. They are self-contained concepts which exist in their own right, but with clear points of intersection which link science, design and engineering. Together they are a whole which is greater than the sum of its parts.

Whether it is the blockchain exchange, the novel application of machine learning to existing financial services or even our partnership-based organisational structure, science, design and engineering are very much at the Foundery’s core. These three spheres embody the pioneering spirit which drives our purpose: from the curiosity to explore more, to the courage to try more and the resolve to do more.

by Jonathan Sinai

The Dimensions Of An Effective Data Science Team


https://static1.squarespace.com/static/5193ac7de4b0f3c8853ae813/5194e45be4b0dc6d4010952e/55ba8a68e4b0aac11e3339cd/1438288490143//img.jpg

The Need for Data Science

Organisations worldwide are increasingly looking to data science teams to provide business insight, understand customer behaviour and drive new product development. The broad field of Artificial Intelligence (AI) including Machine Learning (ML) and Deep Learning (DL) is exploding both in terms of academic research and business implementation. Some of the world’s biggest companies including Google, Facebook, Uber, Airbnb, and Goldman Sachs derive much of their value from data science effectiveness. These companies use data in very creative ways and are able to generate massive amounts of competitive advantage and business insight through the effective use of data.

Have you ever wondered how Google Maps predicts traffic? How does Facebook know your preferences so accurately? Why would Google give a platform as powerful as Gmail away for free? Having data and a great idea is a start – but the likes of Facebook and Google have figured out that a key step in the creation of amazing data products (and the resultant generation of business value) is the formation of highly effective, aligned and organisationally supported data science teams.

Effective Data Science Teams

How exactly have these leading data companies of the world established effective data science teams? What skills are required and what technologies have they employed? What processes do they have in place to enable effective data science? What cultures, behaviours and habits have been embraced by their staff and how have they set up their data science teams for success? The focus of this blog is to better understand at a high level what makes up an effective data science team and to discuss some practical steps to consider. This blog also poses several open-ended questions worth thinking about. Later blogs in this series will go into more detail in each of the dimensions discussed below.

Drew Harry, Director of Science at Twitch, wrote an excellent article titled "Highly Effective Data Science Teams". He states that "Great data science work is built on a hierarchy of basic needs: powerful data infrastructure that is well maintained, protection from ad-hoc distractions, high-quality data, strong team research processes, and access to open-minded decision-makers with high leverage problems to solve" [1].

In my opinion, this definition accurately describes the various dimensions that are necessary for data science teams to be effective. As such, I would like to attempt to decompose this quote further and try to understand it in more detail.

Drew Harry’s Hierarchy of Basic Data Science Needs

Great data science requires powerful data infrastructure

A common pitfall of data science teams is that they are sometimes forced, either through a lack of resources or a lack of understanding of the role of data scientists, to do time-intensive data wrangling activities (sourcing, cleaning and preparing data). Additionally, data scientists are often asked to complete ad-hoc requests and build business intelligence reports. These tasks should ideally be removed from the responsibilities of a data science team to allow them to focus on their core capability: using their mathematical and statistical abilities to solve challenging business problems and find interesting patterns in data, rather than expending their efforts on housekeeping work. To do this, data scientists should ideally be supported by a dedicated team of data engineers. Data engineers typically build robust data infrastructures and architectures and implement tools to assist with data acquisition, data modelling, ETL and so on.

https://sg-dae.kxcdn.com/blog/wp-content/uploads/2014/01/managerial-skills-hallmarks-great-leaders.jpg

An example of this is Facebook, a world leader in data engineering. Just imagine for a second the technical challenges inherent in providing over one billion people with a personalised homepage full of posts, photos and videos on a near-real-time basis. To do this, Facebook runs one of the world's largest data warehouses, storing over 300 petabytes of data [2], and employs a range of powerful and sophisticated data processing techniques and tools [3]. This data engineering capability enables thousands of Facebook employees to use their data effectively and focus on value-enhancing activities for the company, without worrying about the nuts and bolts of how the data got there.

I realise that we are not all blessed with the resources and data talent inherent in Silicon Valley firms such as Facebook. Our data landscapes are often siloed, and our IT support teams, where data engineers traditionally reside, mainly focus on keeping the lights on and putting out fires. But this model has to change: set up your data science teams to have the best chance of success. Co-opt a data engineer onto the data science team. If this is not possible due to resource constraints, then at least provide your data scientists with the tools to easily create ETL code and rapidly spin up bespoke data warehouses, enabling rapid experimentation. Whatever you do, don't let them be bogged down in operational data sludge.
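To make "tools to easily create ETL code" a little less abstract, here is a hedged sketch using nothing beyond the Python standard library: extract raw records from a file, apply a simple cleaning transform, and load them into a local SQLite store a data scientist could query immediately. The file names and fields are illustrative assumptions, not a reference design.

    import csv
    import sqlite3
    from pathlib import Path

    RAW_FILE = Path("transactions_raw.csv")  # illustrative input file
    DB_FILE = Path("sandbox.db")             # illustrative bespoke "warehouse"


    def extract(path):
        """Read raw rows from a CSV export."""
        with path.open(newline="") as f:
            return list(csv.DictReader(f))


    def transform(rows):
        """Drop incomplete rows and normalise types."""
        clean = []
        for row in rows:
            if not row.get("amount") or not row.get("customer_id"):
                continue  # skip records we cannot trust
            clean.append((row["customer_id"].strip(), float(row["amount"])))
        return clean


    def load(records):
        """Load the cleaned records into a local SQLite table."""
        with sqlite3.connect(DB_FILE) as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS transactions (customer_id TEXT, amount REAL)"
            )
            conn.executemany("INSERT INTO transactions VALUES (?, ?)", records)


    if __name__ == "__main__":
        load(transform(extract(RAW_FILE)))

In practice a data engineering team would wrap steps like these in an orchestrator and point them at a proper warehouse; the shape of the work (extract, transform, load) stays the same.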

Great data science requires easily accessible, high-quality data

https://gcn.com/~/media/GIG/GCN/Redesign/Articles/2015/May/datascience.png

Data should be trusted and of a high quality. Additionally, there should be enough data available for data scientists to be able to execute experiments. Data should be easily accessible, and the team should have processing power capable of running complex code in reasonable time frames. Data scientists should, within legal boundaries, have easy, autonomous access to data. Data science teams should not be precluded from using data on production systems; mechanisms need to be put in place to allow for this, rather than use being banned just because "hey – this is production – don't you dare touch!"

In order to support their army of business users and data scientists, eBay, one of the world’s largest auction and shopping sites, has successfully implemented a data analytics sandbox environment separate from the company’s production systems. eBay allows employees that want to analyse and explore data to create large virtual data marts inside their data warehouse. These sandboxes are walled off areas that offer a safe environment for data scientists to experiment with both internal data from the organisation as well as providing them with the ability to ingest other types of external data sources.

I would encourage you to explore the creation of such environments in your own organisations in order to provide your data science teams with easily accessible, high-quality data that does not threaten production systems. It must be noted that to support this kind of environment, your data architecture must allow for the integration of all of the organisation's (and other external) data, both structured and unstructured. As an example, eBay has an integrated data architecture that comprises an enterprise data warehouse that stores transactional data, a separate Teradata deep-storage database for semi-structured data, and a Hadoop implementation for unstructured data [4]. Other organisations are creating "data lakes" that allow raw, structured and unstructured data to be stored in vast, low-cost data stores. The point is that the creation of such integrated data environments goes hand in hand with providing your data science team with analytics sandbox environments. As an aside, all the effort going into your data management and data compliance projects will also greatly assist in this regard.

Great data science requires access to open-minded decision-makers with high leverage problems to solve

https://www.illoz.com/group_articles_images/3248184859.jpg

DJ Patil stated that "A data-driven organisation acquires, processes, and leverages data in a timely fashion to create efficiencies, iterate on and develop new products, and navigate the competitive landscape" [5]. This culture of being data-driven needs to be driven from the top down. As an example, Airbnb promotes a data-driven culture and uses data as a vital input in its decision-making process [6]. It uses analytics in its everyday operations, conducts experiments to test various hypotheses, and builds statistical models to generate business insights, to great success.

Data science initiatives should always be supported by top-level organisational decision-makers. These leaders must be able to articulate the value that data science has brought to their business [1]. Wherever possible, co-create analytics solutions with your key business stakeholders.  Make them your product owners and provide feedback on insights to them on a regular basis. This will help keep the business context front of mind and allows them to experience the power and value of data science directly. Organisational decision-makers will also have the deepest understanding of company strategy and performance and can thus direct data science efforts to problems with the highest business impact.

Great data science requires strong team research processes

Data science teams should have strong operational research capabilities and robust internal processes. This enables the team to execute controlled experiments with high levels of confidence in their results. Effective internal processes help promote a culture of failing fast, learning quickly and feeding valuable insights back into the business experiment/data science loop. Google and Facebook have mastered this in their ability to, amongst other things, aggregate vast quantities of anonymised data, conduct rapid experiments and share these insights internally with their partners, generating substantial revenues in the process.

Think of this as employing robust software engineering principles to your data science practice. Ensure that your documentation is up to date and of a high standard. Ensure that there is a process for code review, and that you are able to correctly interpret the results that you are seeing in the data. Test the impact of this analysis with your key stakeholders. As Drew Harry states, “controlled experimentation is the most critical tool in data science’s arsenal and a team that doesn’t make regular use of it is doing something wrong” [1].
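As a rough illustration of what a controlled experiment boils down to in code, here is a minimal two-group comparison of conversion rates in plain Python. The counts are invented for the example, and a real team would use a proper statistics library, pre-registered hypotheses and agreed stopping rules rather than this bare normal approximation.

    import math

    # Hypothetical experiment: conversion counts for a control and a treatment group.
    control_conversions, control_n = 120, 2400
    treatment_conversions, treatment_n = 156, 2400

    p_control = control_conversions / control_n
    p_treatment = treatment_conversions / treatment_n

    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (control_conversions + treatment_conversions) / (control_n + treatment_n)
    std_err = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treatment_n))

    z = (p_treatment - p_control) / std_err

    # Two-sided p-value from the normal approximation to the binomial.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    print(f"uplift: {p_treatment - p_control:.3%}, z = {z:.2f}, p = {p_value:.4f}")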

In Closing

This blog is based on a decomposition of Drew Harry’s definition of what enables great data science teams. It provides a few examples of companies doing this well and some practical steps and open-ended questions to consider.

To summarise: A well-balanced and effective data science team requires a data engineering team to support them from a data infrastructure and architecture perspective. They require large amounts of data that is accurate and trusted. They require data to be easily accessible and need some level of autonomy in accessing data. Top level decision makers need to buy into the value of data science and have an open mind when analysing the results of data science experiments. These leaders also need to be promoting a data-driven culture and provide the data science team with challenging and valuable business problems. Data science teams also need to keep their house clean and have adequate internal processes to execute accurate and effective experiments which will allow them to fail and learn quickly and ultimately become trusted business advisors.

Some Final Questions Worth Considering and Next Steps

In writing this, some intriguing questions come to mind. Surely there is an African context to consider here? What are we doing well on the African continent, and how can we start becoming exporters of effective data science practices and talent? Other questions include: To what extent does all of the above need to be in place at once? What is the right mix of data scientists, engineers and analysts? What is the optimal mix of permanent, contractor and crowd-sourced resources (e.g. Kaggle-like initiatives [7])? Academia, consultancies and research houses are beating the drum of how important it is to be data-driven, but to what extent is this always necessary? Are there some problems that shouldn't be using data as an input? Should we be purchasing external data to augment the internal data that we have, and if so, what data should we be purchasing? One of our competitors recently launched an advertising campaign explicitly stating that their customers are "more than just data", so does this imply that some sort of "data fatigue" is setting in for our clients?

My next blog will explore in more detail the ideal skill sets required in a data engineering team and how data engineering can be practically implemented in an organisation's data science strategy. I will also attempt to tackle some of the pertinent open-ended questions mentioned above.

The dimensions discussed in this blog are by no means exhaustive, and there are certainly more questions than answers at this stage. I would love to see your comments on how you may have seen data science being implemented effectively in your organisations or some vexing questions that you would like to discuss.

References

[1] https://medium.com/mit-media-lab/highly-effective-data-science-teams-e90bb13bb709

[2] https://blog.keen.io/architecture-of-giants-data-stacks-at-facebook-netflix-airbnb-and-pinterest-9b7cd881af54

[3] https://www.wired.com/2013/02/facebook-data-team/

[4] http://searchbusinessanalytics.techtarget.com/feature/Data-sandboxes-help-analysts-dig-deep-into-corporate-info

[5] https://books.google.co.za/books?id=wZHe0t4ZgWoC&printsec=frontcover#v=onepage&q&f=false

[6] https://medium.com/airbnb-engineering/data-infrastructure-at-airbnb-8adfb34f169c?s=keen-io

[7] https://www.kaggle.com/

by Nicholas Simigiannis

The Power of the Unconversation


Courtesy of DevConf 2017 (devconf.co.za)

On the 9th of March 2017 twelve enthusiastic Foundery members attended DevConf 2017, South Africa’s biggest community driven software development conference: an event that promised learning, inspiration and networking.

With a multi-tracked event such as this one there is usually something for everyone, and yet if you speak to serial conference attendees (guilty as charged), the talks aren’t the greatest reason to attend.

People like me go to conferences in part for the scheduled content, but mostly for the unscheduled conversations in the passage en route to a talk or around a cocktail table during a break. The “unconversations”, I’m calling them. It’s the conference equivalent of another well-known creative outlet: “water cooler conversations”.

I’ll admit that I’m a bit of a conference butterfly – actively seeking out these “unconversations” so that I can join them. I especially take note as crowds disappear into conference rooms. I’m drawn to the groups of people who stay behind wherever they might have gathered. That’s where I’m almost guaranteed to participate in really interesting discussions and learn something new. When I attend conferences, it’s this organic and informal style of collaborative enquiry I look forward to the most.

Courtesy of DevConf 2017 (devconf.co.za)

Ironically it was one of the DevConf talks that helped me understand why these “unconversations” tend to work so well as creative spaces. In his talk on Mob Programming, Mark Pearl mentioned a study conducted by the American Psychological Association which established that groups of 3-5 people perform better on complex problem solving than the smartest person in the group could perform on their own. See “references” for more information.

Loosely translated, a group of people has a better shot of solving a complex problem together than if they tried to solve it independently.

As a Mob Programming enthusiast myself, this makes complete sense to me. What’s interesting is that this research is not new, yet many organisations still discourage “expensive” group-work and continue to reward individual performance, and I can see why. For people with similar upbringings and educational backgrounds to mine, this is the comfort zone. We default to working alone and feel a sense of accomplishment when we achieve success individually. As children we were told to solve problems and find answers on our own. Receiving help was a sign of weakness, and copying was forbidden.

In contrast, the disruptive organisations of the last few decades encourage the complete opposite. These organisations recognise the value of problem-solving with groups of people who have varying, and even conflicting, perspectives. There’s no time for old-school mindsets that favour individual efforts over collaboration. We need to cheat where it’s appropriate by knowing who can help us and what existing ideas we can leverage.

I don’t mean to trivialise it. There’s a bit more involved than just creating opportunities for people to solve problems in groups. According to the book “Collective Genius”, innovative companies such as Google have developed three important organisational capabilities: creative abrasion (idea generation by encouraging conflict and high quality feedback), creative agility (hypothesizing, experimenting, learning and adapting) and creative resolution (deciding on a solution after taking new knowledge into account) all supported by a unique style of leadership. The case studies are incredibly motivating.

Since joining the Foundery I'm discovering that we are practicing these things every day, and the amazing ideas and products born from our "collective genius" serve as confirmation that we're on the right track. Is it always easy? No, absolutely not. It requires a great deal of mindfulness.

When I’m reflective I notice that the greatest ideas and most creative solutions I’ve brought to life were conceived with input from others. Many of the dots I connected for the first time happened during completely unlikely meetings of minds, and some through passionate differences of opinion. In an environment that calls for constant collaboration, it’s wonderfully refreshing to find that the “unconversations” I enjoy so much are happening all around me, every day.

And so long as I’m participating, I am always reminded that together we are more capable of solving really complex problems than the smartest one among us, and I’m becoming more and more OK with that.

References:

by Candice Mesk

 

The Doosra

Working in an investment bank over the past decade has provided the opportunity for many interesting conversations around what the value to society of an investment bank represents. Often the model of a “zero sum game” is proposed which suggests that finance often doesn’t add much – in terms of the transactions that banks facilitate, someone is a winner and someone else is the loser, there is no net gain to the world. Other purists would argue something along the lines of efficient allocation of resources. That initially sounded a bit too creative for my more linear reasoning, but after years in the trenches, it has developed an intuitive ring of truth to it.


Similarly, digital disruption suffers from accusations of questionable motives. For some enterprises, such as Uber, it may appear that the shiny plaything of some young geeks on the west coast of America has been allowed to plough through the livelihoods of real people with real jobs and families around the world. When applying such thinking to digital disruption in the realm of investment banking, the question arises as to whether there is any real value that this rather obscure digital offspring of an already often-questioned enterprise can produce.

At times this line of thinking has led me to check my own passion for this "new vector of commerce". How do I ensure that my natural fascination with some "new and shiny" geek toy does not divert what should be a cold, objective application of technology to investment banking, and is not just an excuse to pursue disruption for its own sake? How do we ensure a golden thread of validity and meaning in this exercise?

I started thinking about Google, and how I could justify what value they might have brought to the world (and not just their shareholders). I won't pretend that I spent much time on this question, but I did come to the following example. Google Maps is a fantastic application, and I probably initially loved it more for the fact that here we have an application that brings the real world (travel, maps, my phone, my car) together with the digital world (the internet, GPS technology, cloud-based algorithms).

However, it is a tool that many people use, and its value extends beyond that initial fascination. In a very real way, there are likely to be hundreds of millions of people who use Google Maps every day to guide them along an optimal route in their cars. And, true to form, it manages to do this: either by advising detours around potential traffic jams, or by simply showing quicker routes that save time.

That extra time in traffic that has been avoided represents a very real saving in carbon emissions, and real energy that would otherwise have been wasted pumping cylinders up and down in an idling vehicle. This is not a zero-sum equation in which Google benefits and many small companies lose out. This is a very real benefit to the world, where increased efficiency reduces the amount of wasted energy and wasted human time. This is a net positive game for the world. In some respect the world of humans wins, and the domain of entropy loses, if we are forced to put a name to it.
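To make that claim tangible, here is a back-of-envelope calculation in Python. Every input below is an assumption chosen purely for illustration, not a measured figure; the point is the shape of the reasoning rather than the number it prints.

    # Back-of-envelope sketch: every input below is an illustrative assumption.
    daily_drivers = 200_000_000      # assumed drivers guided per day
    minutes_saved_per_trip = 2       # assumed idling time avoided per trip
    idle_fuel_litres_per_hour = 0.8  # assumed idling fuel burn of a typical car
    co2_kg_per_litre_petrol = 2.3    # approximate CO2 from burning a litre of petrol

    hours_saved_per_day = daily_drivers * minutes_saved_per_trip / 60
    fuel_saved_litres = hours_saved_per_day * idle_fuel_litres_per_hour
    co2_saved_tonnes = fuel_saved_litres * co2_kg_per_litre_petrol / 1000

    print(f"~{co2_saved_tonnes:,.0f} tonnes of CO2 avoided per day (illustrative)")

Even if the real figures are a fraction of these, the saving is positive-sum rather than a transfer from one party to another, which is the point of the argument.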

Personally I would feel deeply gratified if I could produce such a result that created a new benefit to either the world, or at the very least some small piece of it.

Interestingly enough, this speaks to an underlying theme that appeals to many people who are attracted to incubators of disruption, such as the Foundery. Many people really do feel that they would like to be part of something that changes the world. Perhaps this is because such incubators invoke the perceived "spirit" of Google, Facebook and other Silicon Valley heroes as an inspirational rallying cry. I believe that the example of Google Maps shows that the present opportunity of disruptive technology can deliver such very real efficiencies and benefits. Perhaps those seemingly naive passions that are stirred in the incubatees are valid, and should be released to find their form in the world.

So how do we harness this latent energy? Where do we direct it for the best chance of success?

Some of the technologies to be harnessed, and which represent the opportunity of disruptive technology:

  1. IoT (the internet of things):

At its most simple, this means that various electronic components have become sufficiently small, powerful and, most importantly, cheap. It becomes possible and economically viable to monitor the temperature, humidity and soil hydration of every single plant in a field on a farm, or to measure the status of every machine on a production line in a small factory on the East Rand, without bankrupting the owner with implementation costs.

Apart from sensors, there are actuators in the world such as smart locks, smart lights and the smart home which enable real-world actions to be driven and controlled from the internet. Together these provide the mechanism for the real world to be accessible to the digital world.
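A hedged sketch of that sensor-to-actuator loop, written as a self-contained Python simulation: a real deployment would read actual hardware and publish over a protocol such as MQTT, and every name and threshold here is an illustrative assumption.

    import random
    import time


    def read_soil_moisture(plant_id):
        """Stand-in for a cheap field sensor; returns moisture as a percentage."""
        return random.uniform(10, 60)


    def open_irrigation_valve(plant_id):
        """Stand-in for an internet-connected actuator."""
        print(f"plant {plant_id}: valve opened")


    MOISTURE_THRESHOLD = 25.0  # illustrative trigger level

    for plant_id in range(5):  # imagine one sensor per plant in the field
        moisture = read_soil_moisture(plant_id)
        print(f"plant {plant_id}: moisture {moisture:.1f}%")
        if moisture < MOISTURE_THRESHOLD:
            open_irrigation_valve(plant_id)
        time.sleep(0.1)        # polling interval, shortened for the sketch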

This extends beyond the “real“ real world: there are changes at play, not too far under the surface of the modern financial system, that are turning the real world of financial “things” (shares, bonds, financial contracts) into the internet world of financial “things” (dematerialised and digitised shares, bonds online, financial contracts online).

There are also actuators in this world, such as electronic trading venues and platforms which enable manipulation of digital financial contracts by digital actors of finance.

  2. Data is free:

The cost per megabyte of storage continues to drop exponentially, and online providers are able to offer services on a rental basis that would have been inconceivable a decade ago. The ubiquity of cheap and fast bandwidth enables this even more so.

  3. Computation is cheaper than ever, and simple to locate with cloud-based infrastructure:

Moore's law continues unabated, providing computational power that drops in cost by the day, not to mention the promise of quantum computing, which seems to be just around the corner.

  4. The technologies to utilize are powerful, free and easy to learn:

If you have not yet done so, have a sojourn on the internet across such topics as python, tensorflow, quandl, airflow and github. These represent (largely) free, open-source capabilities to harness the technologies above and make them your plaything. Not only that, the amount of free resources "out there" that can help you master each of these is astounding.

A brief exercise in trying to automate my house using python revealed hundreds of YouTube videos of similarly obsessed crazies presenting fantastic applications of python to automating everything from their garage doors to fish tanks, pool chlorine management systems and alarms. These YouTube videos are short, to the point, educational, free and, most importantly, crowd-moderated: all the other python home-automation geeks have ensured that the very good videos are upvoted and easily found, and the least fit are doomed to obscurity.

This represents another, perhaps unforeseen, benefit of the internet: crowd-sourced, crowd-moderated, efficient and specific education. JIT learning ("just-in-time learning") means being able to learn everything you need to accomplish a task five minutes before you need to solve it, and perhaps to forget it all almost immediately once you have solved it (an interesting paradigm to counter traditional education).

( P.S. if you have kids, or want to learn other stuff, checkout https://www.khanacademy.org/ )

Given the above points, it has never been easier for someone to create a capability to source information in real time from the real world, store that information online, apply unheard of computing power to that information using new, powerful and easy programming languages which can be learned online in a short period of time.

It might be a moot point that is valid at every point in time in every generation, but it has never been easier and cheaper to try out an idea online and see if it has legs.

So we have identified people with passion, a means of delivery and so now … what?

Those of you who are paying attention will realise that I have skirted the question of whether we have added any real value to the world, or whether we feel that we can. Time will tell, and I would hate to let the cat out of the bag too early. But there is one thing that is true: if you are one of those misguided, geek-friendly, meaning-seeking, after-hours change agents, or if you have an idea that could change the world, come and talk to us … the door is always open.

by Glenn Brickhill

Coding DevSecOps

The Enterprise Problem

Typically, IT policies cover many aspects of the technology landscape – including security. These policies are written in elaborate documents that are then stored somewhere cryptic. Finding these policies is very often a challenge.

Then we hire experts to come in and manually run penetration tests against the environment, which gives us measurement feedback on both compliance and vulnerabilities.

We then take these manually generated penetration test reports and ask people in our organizations to remediate according to the findings.

While all this is happening we make changes to current policies and bring new software into the environment to satisfy new business requirements. At the same time, new threats appear.

Each part of the process can span months, so the cycle as a whole takes even longer.

We need to close this window and shrink the time frame from months to days, to hours and, ideally, to real time, in order to match the timeframes of potential hackers. This is DevSecOps, and I’m going to tell you how to do it.

The Road to DevSecOps

We’ve framed the enterprise problem; now how do we apply DevSecOps to it? Well, the answer is to delve a little deeper into DevSecOps.

DevSecOps = DevOps + Security (Sec)

In the world of DevSecOps, as you may predict, we have three teams working together: Development, Security and Operations.

The “Sec” of DevSecOps introduces process changes to the following elements of an organization:

  • Engineering
  • Operations
  • Data Science
  • Compliance

This may seem a little daunting, so let’s unpack these changes.

Engineering & Operations

Engineering refers to how you build with security in mind and bring security into your engineering pipeline. A typical engineering pipeline is shown below:

As we observe code eating the world in practice, the engineering pipelines of the Development, Security and Operations teams come to look very similar. Coding best practices apply to all. Everyone needs to change the way they think: we are no longer working in silos but rather working together in a well co-ordinated and harmonious manner.

Development team

  • Writes the system code with security in mind
  • Adapts the engineering pipeline (policies & practices), most notably by adding static code analysis with SonarQube to look for security vulnerabilities

Operations team

  • Writes Puppet code to manage infrastructure state up to the application layer, as well as to comply with OpenSCAP policies
  • Runs static code analysis by means of PuppetLint

Security team

  • Experiments with, automates and tests new security approaches, and creates Puppet modules

Security operations

  • Continues to detect, hunt and then contain threats
  • Writes OpenSCAP policies that align with IT policy

Some examples of the parallels:

  • Static code analysis: SonarQube for Development, PuppetLint for Operations
  • Automated unit tests: xUnit-style frameworks for Development, Beaker for Operations (a unit-test sketch follows below)
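To show what an xUnit-style security test can look like on the Development side, here is a minimal sketch using python’s unittest; check_password_policy and MIN_LENGTH are hypothetical stand-ins for whatever rule your Sec team has codified, and Beaker plays the equivalent role for Puppet modules on the Operations side.

    # Minimal xUnit-style security test sketch (python unittest).
    # check_password_policy and MIN_LENGTH are hypothetical stand-ins for a real policy helper.
    import unittest

    MIN_LENGTH = 12

    def check_password_policy(password):
        # Illustrative policy: long enough, contains a digit and an upper-case letter.
        return (
            len(password) >= MIN_LENGTH
            and any(c.isdigit() for c in password)
            and any(c.isupper() for c in password)
        )

    class PasswordPolicyTest(unittest.TestCase):
        def test_rejects_short_passwords(self):
            self.assertFalse(check_password_policy("Short1"))

        def test_accepts_compliant_passwords(self):
            self.assertTrue(check_password_policy("CorrectHorse42Battery"))

    if __name__ == "__main__":
        unittest.main()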

With this convergence there is every reason to expect that the security team will soon follow similar practices, once its toolsets reach the right level of maturity.

Data Science & Compliance

Once you start collecting data you can apply backward-looking analytics and forward-looking data science approaches. The data collected from DevSecOps can be used to augment already well-established security data. In particular, Puppet by its nature enforces a specific state; if this state changes without sanction, these events can be used as ‘trip wires’ to detect potential intruders.
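As a sketch of that trip-wire idea, the snippet below assumes Puppet run reports have already been exported as JSON files carrying a top-level status field, and simply flags any run that had to correct drift or failed to enforce state; the report directory and field names are assumptions to adapt to your own report processor.

    # Trip-wire sketch: flag Puppet runs that had to correct unsanctioned drift.
    # Assumes reports are exported as JSON files with a top-level "status" field
    # ("changed", "unchanged" or "failed"); adapt the paths and field names to your setup.
    import json
    from pathlib import Path

    REPORT_DIR = Path("/var/lib/devsecops/puppet-reports")  # hypothetical export location

    def unsanctioned_changes(report_dir):
        # Yield (host, status) for every run that changed state or failed to enforce it.
        for report_file in sorted(report_dir.glob("*.json")):
            report = json.loads(report_file.read_text())
            if report.get("status") in ("changed", "failed"):
                yield report.get("host", report_file.stem), report["status"]

    if __name__ == "__main__":
        for host, status in unsanctioned_changes(REPORT_DIR):
            print(f"ALERT: {host} drifted from enforced state (run status: {status})")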

Measurement gives you compliance feedback against your policies. This occurs at the cadence at which you configure Puppet to run, which defaults to 30 minutes.

To Conclude

In the new world, instead of having IT policies as documents, we codify them: Sec writes the policies, and then Dev & Ops work together to write the remediation code.

Measurement moves from a manual state to an automated state: we write the policy code in OpenSCAP and remediate the policy breaches with the Puppet code we have written. Threats, environment changes and policy changes still occur, as we expect.
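As an illustration of that automated measurement step, here is a minimal python wrapper around the OpenSCAP command-line scanner; the profile id and datastream path are hypothetical placeholders for whatever content your Sec team publishes, and the exit-code handling follows oscap’s documented convention (0 = all rules passed, 2 = at least one rule failed).

    # Automated compliance measurement sketch: run an OpenSCAP scan and report the outcome.
    # PROFILE and DATASTREAM are hypothetical placeholders for your Sec team's published content.
    import subprocess
    import sys

    PROFILE = "xccdf_org.example_profile_baseline"              # hypothetical profile id
    DATASTREAM = "/usr/share/xml/scap/example-baseline-ds.xml"   # hypothetical SCAP datastream

    def run_scan(results_path="results.xml"):
        # oscap returns 0 when all rules pass and 2 when at least one rule fails.
        completed = subprocess.run(
            ["oscap", "xccdf", "eval",
             "--profile", PROFILE,
             "--results", results_path,
             DATASTREAM]
        )
        return completed.returncode

    if __name__ == "__main__":
        code = run_scan()
        if code == 0:
            print("Compliant: all evaluated rules passed.")
        elif code == 2:
            print("Non-compliant: at least one rule failed - trigger Puppet remediation.")
        else:
            print("Scan error - check the oscap output.")
        sys.exit(code)

Schedule something like this at the same cadence as your Puppet runs and the compliance feedback loop described above closes automatically.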

The difference in the DevSecOps world is that we update the policy & remediation code in minutes and then roll it out to the organization in hours.

That’s the true power of DevSecOps!

DevSecOps talks to how the principles of DevOps can be applied to the broader security context.

The path to building a culture of security in an organization is similar to that of DevOps: set the right expectations of outcome, then empower and measure.

In this world of security challenges, can you afford not to do DevSecOps?

The Future

Look out for my next post on how to apply DevSecOps in a containerized environment!

by Jason Suttie

The essence of a FinTech team

Throughout my short career I have found myself wondering what the keys to success are. I have come to the realization that, though the media tells us stories of successful individuals, few key inventions were conceptualized and industrialized by just one person. So what makes a successful team, and how would you put one together?

The idealist within me wishes that I could provide a recipe for the ideal FinTech team. I would like to be able to say that in order to revolutionize the world you need 5 analysts, 10 developers and 17 data scientists, but even this wouldn’t guarantee success. So what is the essence of a FinTech team? I may not have all the answers, but I do think there are some common elements in truly successful teams.

Purpose

The word purpose is overused but misunderstood. It took on a new meaning for me when described by Viktor Frankl in his 1946 classic, “Man’s Search for Meaning”, written in the context of the World War 2 concentration camps. Viktor was a neurologist and psychiatrist who was captured and imprisoned in the camps. He shares his observations on the motivation and depression that he saw in his fellow prisoners. Personally, I think Viktor does a better job of explaining it than I could.

Viktor explains that the reason people survived the Holocaust is that they had something to live for, a true purpose. Sometimes this was as simple as a desire to see their family again; in other cases it was more complex. It is this motivation by purpose that I believe galvanizes a team.

Salim Ismail insists that all start-ups set a massive transformative purpose. These purpose statements need to be short and to the point so that there is no room for misinterpretation. If your purpose cannot be stated in one sentence, then it has not been distilled to its essence. This helps focus all team members on the same goal. Most importantly, it means that all team members should believe in the purpose. Getting this right is almost impossible, but I would be willing to bet that successful teams have gotten this right. My memory takes me back to South Africa’s 1995 Rugby World Cup winning team, who went through the entire tournament with the purpose statement of “one team, one country.” A purpose that resonates so strongly in every individual within the team makes it almost impossible to fail.

http://www.sport24.co.za/Rugby/Springbok-Heritage/1995-RWC-squad-honoured-for-greatest-day-in-SA-rugby-history-20150624

People

I have always been in awe of start-up stories outlining how a group of people started a multi-billion-dollar company in their garage. In the past few years I have found myself in the proverbial garage of several acquaintances and friends, and it was only then that I realized what was driving this behavior. I found myself drawn to these groups merely because we were enjoying the hard work and the time we were spending with each other. It is easier to accomplish a complicated, long-term goal when you have good people around you whom you connect with. I’m not at all saying that you need to be best friends with all your team members, but I do believe that you need to find some commonality in order to have a human connection.

What about skills?

I am by no means diminishing the need for skilled people in your team. I am, however, asserting that even if you have the best skills, without a purpose and a connected team you are doomed to fail. Pay more attention to the qualitative things when setting up the team: the things we take for granted, like the feeling when you walk through the office doors, the vibe in the room and the “nice to have” social interactions.

So I guess my recipe is this:

Find a purpose that resonates with you. Then find a group of people that you can connect with. If the purpose resonates with your team, I believe you have a good chance of success.

by Tyrone Naidoo

Link to video: https://www.youtube.com/watch?v=fD1512_XJEw