Releasing Software Developer Superpowers

This article is aimed at anyone looking to gain an edge in building or advancing a software development team in the digital age. The concepts can be applied, to some degree, outside of software development. Open to discussion – views are my own.

UX is not just for Customers

User Experience is an ever-growing component of product development, with user-centric design paradigms helping to ensure that personalisation and consumer/market fit are achieved. From a development team's view, leveraging some of these user experience concepts in how the team works can bring operational efficiency and accelerate product development. For example, what is the experience like for each of the developer personas in your team? How do their days translate into user stories? Can interviewing the development community lead to better features for your development culture?

Build Products not Technology

Super important. With developers, there is sometimes an over-emphasis on building features, often for the features' own sake. Keeping the lens on the value or “job to be done” for the customer throughout the delivery of a product helps ensure you are building what your customer truly needs. To do this, select and track a small set of metrics that measure value for that product, and keep your product development tightly coupled to your customer experience development.

Leverage PaaS to deliver SaaS

This sounds catchy, but it's becoming the norm. Five years ago it took a developer a week of development time to do what you can now do in Amazon Web Services or Azure in minutes. This has led to a paradigm shift, where you begin to look at the platforms and tools available to help developers deliver great products to customers. Of course, there will always be custom-developed apps, but you can help your developers by getting them the right toolkit. There is no point reinventing the wheel when off-the-shelf open source components are sitting there, right? Products like Docker and Spring, and concepts like DevOps, are bringing huge value to organisations, enabling the delivery of software and microservices at enhanced speed. That said, the balance between buying off the shelf and building custom remains a careful decision at both product and strategic levels.

“The role of a developer is evolving into one like a top chef, where all the ingredients and tools are available – it's just about getting the recipe right to deliver beautiful products to your customer.”

Create Lean Ninjas!


Evolving the cultural mindset of developers and the organisation toward agile development is super important. Having a critical mass of development resources, plus defined agile processes to deliver business success, can reshape your organisation into one where value is created rapidly. However, it's important to perform ethnographic studies on the organisation to assess the culture first. This helps decide which agile frameworks and practices (kanban, scrum, XP, etc.) will work best to evolve the development life cycle.

Implement the 10% rule

This could be slightly controversial, and it can be hard to do. Developers should aim to spend 10% of their time looking at the new: new technologies, development practices, company direction, conferences, training. Otherwise you will end up with a siloed, mis-skilled pool of superheroes with their powers bottled.

However, with lean ninjas and effective agile company-wide processes, resources and time can be closely aligned to specific projects, avoiding randomness being injected into the development lifecycle. Developers need time to immerse and focus. If you can't give them that, or you continuously distract them with mistimed requests, they will leave. If you can enable them, 10% is achievable.

Risk Awareness


We are seeing an evolution in threats to enterprises all over the world, and in a software-driven and software-defined world, getting developers to build security into their design practices before products hit the market can help protect companies. Moons ago, everything sat on-prem. Consumer demand now means a myriad of cloud-deployed services are adding to a complex global technology footprint. If developers know the risk landscape of where they deploy, they can act accordingly. Naturally, lining them up with business leaders on compliance and security also helps on the educational pathway.

Business and Technology Convergence

We are not only seeing an evolution in development practices – we are also seeing a new type of convergence (brought about by lean, agile and other methods) where business roles and technology roles are coming together. Business analysts and UX people are being positioned directly into development teams to represent the customer and change the mindset. Technology roles are being positioned directly into business services teams like HR and finance. This is impacting culture, whereby the savviness in both directions needs to be embraced and developed.


Growth Mindset

We have mentioned mindset a lot in this article. That's because it's hugely important. Having the right culture and mindset can make all the difference in team success. As Carol Dweck describes in her book “Mindset”, you can broadly categorise mindsets into two: growth and fixed. This applies in all walks of life, but for team building it can be critical.

In a fixed mindset students believe their basic abilities, their intelligence, their talents, are just fixed traits. They have a certain amount and that’s that, and then their goal becomes to look smart all the time and never look dumb. In a growth mindset students understand that their talents and abilities can be developed through effort, good teaching and persistence. They don’t necessarily think everyone’s the same or anyone can be Einstein, but they believe everyone can get smarter if they work at it.

Creating a team where being on a growth curve is the norm and failures are seen as learning can enable a brilliant culture. As Michelangelo said, “I am still learning.” This matters especially as we evolve toward six generations of developers working together: how do we ensure we are creating and mentoring the next set of leaders, from interns through to experienced people?

Check out a TED talk from Carol here – link.

And most importantly … HAVE FUN!

Numenta and MemComputing: Perfect AI Synergy


Let's look at two forces of attraction happening in the technology space, specifically around creating truly artificially intelligent systems by utilizing advances in both software and hardware technologies.

For years, even decades, we have chased it. AI has been at the top of any list of research interests, and while there have been some advances, the pertinent challenge has been that while hardware electronics advanced in the 70s and 80s, software design lagged behind. Then software advanced incredibly over the past decade. So now, in July 2015, we reach a key point of intersection of two “brain-based technologies”, which could be built together in a way that may lead to “true AI”.

At no other point in history have we had both hardware and software technologies that can “learn” like we can, whose design is based on how our mind functions.

Numenta

First, let's look at Numenta. Apart from having the pleasure of reading Jeff Hawkins' excellent book “On Intelligence”, I have started to look at the open source AI algorithms they provide (GitHub here). In a journey that started nine years ago, when Jeff Hawkins and Donna Dubinsky founded Numenta, the plan was to create software modeled on the way the human brain processes information. Whilst it's been a long journey, the California-based startup has made accelerated progress lately.


Hawkins, the creator of the original Palm Pilot, is the brain expert and co-author of the 2004 book “On Intelligence.” Dubinsky and Hawkins met during their time building Handspring, and they pulled together again in 2005 with researcher Dileep George to start Numenta. The company is dedicated to reproducing the processing power of the human brain, and it shipped its first product, Grok, earlier this year to detect odd patterns in information technology systems. Those anomalies may signal a problem in a computer server, and detecting the problems early could save time, money or both. (Think power efficiency in servers.)

You might think, hmm, that's not anything great for a first application of algorithms based on the mind, but it's what we actually started doing as Neanderthals: pattern recognition. First it was objects, then it was patterns of events, and so on. Numenta is built on Hawkins' theory of Hierarchical Temporal Memory (HTM), which describes how the brain has layers of memory that store data in time sequences, and which explains why we easily remember the words and music of a song. (Try this in your head: try starting a song in the middle, or the alphabet. It takes a second longer to get going.) HTM became the formulation for Numenta's code base, called the Cortical Learning Algorithm (CLA), which in turn forms the basis of applications such as Grok.
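To make the sequence idea concrete, here is a tiny Python sketch. It is nowhere near Numenta's CLA – just an illustration of the principle that a sequence stored purely as transitions can only be recalled by rolling forward from a cue, not by jumping to an arbitrary position.

```python
# Toy illustration (not Numenta's CLA): a first-order sequence memory.
# The sequence is stored only as "what follows what", so recall must roll
# forward from a known cue -- there is no recall_at(position).
from collections import defaultdict

class SequenceMemory:
    def __init__(self):
        self.transitions = defaultdict(list)  # element -> elements seen next

    def learn(self, sequence):
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current].append(nxt)

    def recall_from(self, cue, length):
        """Replay the learned sequence forward from a cue element."""
        out, current = [cue], cue
        for _ in range(length - 1):
            nexts = self.transitions.get(current)
            if not nexts:
                break
            current = nexts[0]
            out.append(current)
        return out

memory = SequenceMemory()
memory.learn("abcdefghijklmnopqrstuvwxyz")
print(memory.recall_from("m", 5))  # ['m', 'n', 'o', 'p', 'q']
```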

Still with me? Great. So that's the software, designed and built on the layers of the cortex of our brains. Now let's look at the hardware side.

 

Memcomputing

After reading this article in Scientific American recently, at the same time as reading Hawkins' book, I really began to see how these two technologies could meet somewhere: silicon up, algorithms down.


A new computer prototype called a “memcomputer” works by mimicking the human brain, and could one day perform notoriously complex tasks like breaking codes, scientists say. These new, brain-inspired computing devices also could help neuroscientists better understand the workings of the human brain, researchers say.

In a conventional microchip, the processor, which executes computations, and the memory, which stores data, are separate entities. Data must constantly be transferred between the processor and the memory, which consumes energy and time and thus limits the performance of standard computers.

In contrast, Massimiliano Di Ventra, a theoretical physicist at the University of California, San Diego, and his colleagues are building “memcomputers,” made up of “memprocessors,” that can actually store and process data. This setup mimics the neurons that make up the human brain, with each neuron serving as both the processor and the memory.

I won't go into the specifics of how the building blocks are designed, but they are based on the three basic components of electronics – capacitors, resistors and inductors – or, more aptly, memcapacitors, memristors and meminductors. The paper describing this is here.

Di Ventra and his associates have built a prototype from standard microelectronics. The scientists investigated a class of problems known as NP-complete. With this type of problem, a person may be able to quickly confirm whether any given solution works, but cannot quickly find the best solution. One example of such a conundrum is the “traveling salesman problem”, in which someone is given a list of cities and asked to find the shortest route that visits every city exactly once and returns to the starting city. Finding the best solution is a brute-force exercise.
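To ground the idea, here is a throwaway Python sketch of the brute force a classical machine is stuck with: checking one candidate tour is trivial, but finding the best one means enumerating every permutation, which is exactly the combinatorial explosion memprocessors aim to attack in parallel. The city names and distances are invented for illustration.

```python
# Brute-force traveling salesman: enumerate every tour, keep the shortest.
from itertools import permutations

def tour_length(tour, dist):
    # Sum consecutive legs, including the leg back to the starting city.
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def brute_force_tsp(cities, dist):
    start, best = cities[0], None
    for rest in permutations(cities[1:]):          # (n-1)! candidate tours
        tour = [start] + list(rest)
        length = tour_length(tour, dist)
        if best is None or length < best[0]:
            best = (length, tour)
    return best

# Tiny example with symmetric distances between four cities.
dist = {
    "A": {"A": 0, "B": 2, "C": 9, "D": 10},
    "B": {"A": 2, "B": 0, "C": 6, "D": 4},
    "C": {"A": 9, "B": 6, "C": 0, "D": 3},
    "D": {"A": 10, "B": 4, "C": 3, "D": 0},
}
print(brute_force_tsp(["A", "B", "C", "D"], dist))  # (shortest length, tour)
```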

The memprocessors in a memcomputer can work together to find every possible solution to such problems. “If we work with this paradigm shift in computation, those problems that are notoriously difficult to solve with current computers can be solved more efficiently with memcomputers,” Di Ventra said. In addition, memcomputers could tackle problems that scientists are exploring with quantum computers, such as code breaking.

Imagine running software that is designed based on our minds, on hardware that is designed on our minds. Yikes!

In a future blog, I will discuss what this means in the context of the internet of things.


Distributed Analytics in IoT – Why Positioning is Key


The current global focus on the “Internet of Things (IoT)” has highlighted the importance of sensor-based, intelligent and ubiquitous systems in improving our lives and introducing increased efficiency into them. There is a natural challenge in this, as the load on our networks and cloud infrastructures from a data perspective continues to increase. Velocity, variety and volume are attributes to consider when designing your IoT solution, and it is then necessary to decide where and when the execution of analytical algorithms on the data sets should take place.

Apart from classical data centers, there is huge potential in looking at the various compute sources across the IoT landscape. We live in a world where compute is at every juncture, from our mobile phones to our sensor devices and gateways to our cars. Leveraging this normally idle compute is important in meeting the data analytics requirements of IoT, and future research will attempt to address these challenges. There are three main classical architecture principles that can be applied to analytics: centralized, decentralized and distributed.

The first, centralized, is the most known and understood today, and it is a pretty simple concept: centralized compute across clusters of physical nodes acts as the landing zone (ingestion) for data coming from multiple locations, so the data sits in one place for analytics. By contrast, a decentralized architecture utilizes multiple large distributed clusters located hierarchically in a tree-like structure. Think of a tree whose leaves sit close to the data sources and can compute on the data earlier, or distribute it more efficiently for analysis. Some form of grouping can be applied, for example per geographical location, or a hierarchy set up to distribute the jobs.

Lastly, in a distributed architecture, which is the most suitable for devices in IoT, the compute is everywhere. Generally speaking, the further you move from centralized, the smaller the compute becomes, right down to the silicon on the devices themselves. It should therefore be possible to push analytics tasks closer to the device. In that way, these analytics jobs can act as a sort of data filter and decision maker: determining whether quick insight can be gained from smaller data sets at the edge or beyond, and whether to push the data to the cloud or discard it. Naturally, with this type of architecture there are more constraints and requirements around effective network management, security and monitoring of not only the devices but the traffic itself. It makes more sense to bring the computation power to the data, rather than the data to a centralized processing location.
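As a rough illustration of that "compute near the source" idea (the node names, summary fields and topology here are my own, purely illustrative), a leaf or gateway node might reduce raw readings to compact summaries so that only the summaries travel up the tree toward the centralized cluster:

```python
# Hedged sketch: leaf nodes summarize locally, a mid-tier aggregator merges
# the summaries, and only this reduced data ever reaches the cloud.
from statistics import mean

def summarize(readings):
    """Reduce a batch of raw sensor readings to a small summary record."""
    return {"count": len(readings), "mean": mean(readings),
            "min": min(readings), "max": max(readings)}

class LeafNode:
    def __init__(self, name):
        self.name, self.buffer = name, []

    def ingest(self, value):
        self.buffer.append(value)

    def flush(self):
        summary, self.buffer = summarize(self.buffer), []
        return {"node": self.name, **summary}

class RegionalAggregator:
    """A mid-tier cluster that merges leaf summaries before the cloud sees them."""
    def merge(self, summaries):
        total = sum(s["count"] for s in summaries)
        weighted = sum(s["mean"] * s["count"] for s in summaries) / total
        return {"nodes": [s["node"] for s in summaries],
                "count": total, "mean": weighted}

leaf_a, leaf_b = LeafNode("gateway-a"), LeafNode("gateway-b")
for v in (21.0, 21.4, 20.8):
    leaf_a.ingest(v)
for v in (18.9, 19.1):
    leaf_b.ingest(v)
print(RegionalAggregator().merge([leaf_a.flush(), leaf_b.flush()]))
```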

There is a direct relationship between the smartness of the devices and the selection and effectiveness of these three architectures. As our silicon gets smarter, more powerful and more efficient, more and more compute will become available, which should result in less strain on the cloud. As we distribute the compute, our solutions should also become more resilient, as there is no single point of failure.

In summary, “intelligent infrastructures” now form the crux of the IoT paradigm. IoT practitioners will have more choice in where they place their analytics jobs, so that they best utilize the compute that is available and control latency for faster response, meeting the real-time requirements of the business metamorphosis that is ongoing.

EnterConf Belfast – Day 1

Firstly, to the quote of the day “We all have to avoid software that epically sucks”.

Me at the Insight Stage!

Today I attended day one of EnterConf in Belfast which, for those who don't know it, is a spin-off conference from Web Summit focused on the enterprise aspect of our tech world. On arrival, I must admit I was really proud of the EnterConf team for choosing the venue. It has a lot of history associated with it, sitting in the heart of the Titanic Quarter where the Titanic was built – and for its time, that was an “enterprise ship”! This created a chilled-out atmosphere, which was a nice differentiator from the Web Summit to be held again in November. The day was full of detailed and focused meetups and conversations, and did a great job of giving a different experience of what a conference can provide. Kudos.
There were two stages, named Center and Insights, with startup exhibits and food and coffee stands to ensure everyone was nicely refreshed throughout the day. Whilst I won't cover all the talks, I have picked out a few to show the types of topics being discussed.

The first one I'll mention was by Lukas Biewald of Crowdflower, entitled “Processing Open Data”, who spoke extensively on their efforts to clean up data, and also on aspects of data moderation. It really resonated with me, as I have been interested in and developing data-cleansing frameworks over the past number of years, and always struggle with the data pollution that skews our insight. Quote from Lukas: “If you want to improve your algorithm, just add more data”. Lukas is in action below.

Lukas Biewald of Crowdflower

Stephen McKeown from AnalyticsEngines and Amir Orad from Sisense were also on a panel on “Democratising Data”, which focused on how speeding up analytics for companies of all sizes creates a more level playing field for startups competing with enterprises. Quote from this session: “Bring data into your company's DNA”.

Stephen McKeown and Amir Orad

There were a few familiar faces present, with my former EMC colleague and mentor Steve Todd amongst the speakers, talking on the “Economic Value of Data” (check out Steve's blog here for more fascinating content on this topic). Steve spoke on the Center stage, and it was great to see this topic present, as it really stood out as a conversation we should all be having. Steve gave a similar talk in Cork for an it@Cork event we organised in February, and it was great to see the advancement in his research in this area. He spoke on “Valuation Business Processes”, with the categories within that being M&A, Asset Valuation, Data Monetisation, Data Sale and Data Insurance. I won't spoil the rest, as I am sure Steve will blog on this soon.

Steve Todd speaking on Economic Value of Data

Also on the Center stage, in one of the talks to close out the evening, Barak Regev, Head of Google Cloud Platform EMEA, spoke on “Architecting the Cloud”. It was great to get an update on their vision, and Barak showed Google's ambition to “Build What's Next”.

Barak Regev from Google – Build What's Next

And to end on a great quote from James Petter, VP EMEA for Pure Storage: “Security should be like an onion – it should be layered, and you can't reach the center without breaching a layer”.

The day brought many epic conversations across more than 10 different nationalities, including a walk back to the city with the visionary Teemu Arina. His talk on biohacking was incredibly insightful. It spoke to the challenge of humans tracking their lives through self-quantification. Teemu took me through his ideas on how humans can do a better job of hacking their bodies for information and using that to improve quality of life. Teemu's book is here!

So now, it's off to the night dinner, to drink a beer or two and build a few more contacts! In the morning, it looks like a few good talks on machine intelligence will set the trend for another awesome day!

IoT meets Data Intelligence: Instant Chemistry

Even in the ideal world of a perfect network topology, a web of sensors, a security profile, a suitable data center design and lots of applications for processing and analyzing, one thing is constant across all of these: the data itself. Data science is well talked about, and careers have been built on the concept. It is normally aimed at the low-hanging fruit of a data set, the things that are easily measured. Science will take you so far, but it is data intelligence that shows the true value, with the capability to predict the impact of actions and track it over time, building modelling engines to solve future problems.

Even the data set is different for data intelligence as opposed to data science, which relies on lots and lots of data (think of Facebook working out the effectiveness of its changes and features). The data intelligence set is more complex, smaller even, and can be contained in a single process or building. Imagine a hospital's set of machines producing live data to an analytics engine, using historical models to compare against the live data and gauge risk to patients. It can have real, tangible benefit to quality of life. Commonly called “operational intelligence”, the idea is to apply real-time analytics to live data with very low latency. It's all about creating that complete picture: historical data and models working with live data to provide a solution that can potentially transform all kinds of industry.

At the core of any system of this kind is decision making, and again one must strive to make this as intelligent as possible. There are two types of decision making: static and dynamic. With the assistance of mathematical models and algorithms, any IoT data set can be analyzed for the further implications of alternative actions, which should increase the efficiency of decision making.

At the IoT device level, there is scope to apply such a solution. Given the limited storage capacity on the devices themselves, a form of rolling deterministic algorithm could analyse a window of sensor readings and produce an output deciding whether or not to send a particular measurement to the intelligent gateway or cloud service.
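A minimal sketch of what such a rolling decision could look like on-device follows; the window size and deviation threshold are illustrative assumptions, not a reference design.

```python
# The device keeps only a fixed-size window of recent readings (respecting
# its limited storage) and deterministically decides, per measurement,
# whether it is worth sending to the gateway or cloud.
from collections import deque
from statistics import mean, pstdev

class RollingSendDecision:
    def __init__(self, window=32, k=3.0):
        self.window = deque(maxlen=window)   # bounded on-device memory
        self.k = k                           # deviation threshold (illustrative)

    def should_send(self, reading):
        history = list(self.window)
        self.window.append(reading)
        if len(history) < 8:
            return True                      # too little context: be conservative, send
        mu = mean(history)
        sigma = pstdev(history) or 1e-9      # avoid division by zero on flat signals
        return abs(reading - mu) > self.k * sigma

decider = RollingSendDecision()
for reading in [36.6, 36.7, 36.5, 36.6, 36.7, 36.6, 36.5, 36.6, 36.7, 39.2]:
    print(reading, "->", "send" if decider.should_send(reading) else "keep local")
```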

Another proposed on-device implementation might use a deviation-from-correctness model such as the Mahalanobis-Taguchi System (MTS), an information-pattern technology that has been used in various diagnostic applications to support quantitative decisions by constructing a multivariate measurement scale using data analytic methods. In the MTS approach, the Mahalanobis distance (MD, a multivariate measure) is used to measure the degree of abnormality of patterns, and principles of Taguchi methods are used to evaluate the accuracy of predictions based on the constructed scale. The advantage of MD is that it considers the correlations between variables, which are essential in pattern analysis. Given that it can be used on a relatively small data set – the more historical samples, the better the model to compare against – it could be utilized in the hospital diagnosis example. Perhaps the clinician needs a quick on-device prediction of how close a patient's measurement sits to a sample set of recent hospital measurements?
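For the Mahalanobis distance piece specifically, a hedged sketch might look like the following: a new multivariate reading is scored against a small “normal” reference sample, with correlations between variables taken into account. The data and the alert cut-off are invented for illustration, not clinical values.

```python
import numpy as np

def mahalanobis(x, reference):
    """Distance of observation x from the reference sample's distribution."""
    mu = reference.mean(axis=0)
    cov = np.cov(reference, rowvar=False)
    inv_cov = np.linalg.pinv(cov)            # pseudo-inverse tolerates small samples
    diff = x - mu
    return float(np.sqrt(diff @ inv_cov @ diff))

# Reference set: e.g. recent "normal" readings of (heart rate, systolic BP).
reference = np.array([[72, 118], [75, 121], [70, 115], [74, 120], [73, 119]])
new_reading = np.array([96, 142])

score = mahalanobis(new_reading, reference)
print(f"Mahalanobis distance: {score:.2f}")
if score > 3.0:                               # illustrative cut-off
    print("Flag for clinician review")
```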

Taking this one stage further, if we expanded this to multiple hospitals, could we start to think about creating linked data sets that are pooled together to extract intelligence? What if a weather storm is coming? Will it affect my town or house? Imagine if we had sensors on each house, tracking the storm in real time to predict its trajectory and track direction changes, with the service then communicating directly with the home owners in its path.

Following the premise of open source software, consider now the concept of open data sets, linked or not. Imagine I was the CEO of a major company in oil and gas, eager to learn from other companies in my sector and, in turn, to allow them to learn from us through data sets. Tagging data by type (financial, statistical, online statistical, manufacturing, sales, for example) allows a metadata search engine to be created, which can then be used to gain industry-wide insight at the click of a mouse. The tagging is critical, as the data is then not simply a format, but descriptive as well.
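As a toy illustration of the tagging idea (all names, URIs and fields here are hypothetical), a metadata search engine only ever needs to match on the descriptive tags, never on the underlying data itself:

```python
# Sketch: published data sets carry descriptive tags; search matches on tags.
from dataclasses import dataclass, field

@dataclass
class DataSetRecord:
    owner: str
    uri: str
    tags: set = field(default_factory=set)

class MetadataIndex:
    def __init__(self):
        self.records = []

    def publish(self, record):
        self.records.append(record)

    def search(self, *wanted_tags):
        wanted = set(wanted_tags)
        return [r for r in self.records if wanted <= r.tags]

index = MetadataIndex()
index.publish(DataSetRecord("company-a", "s3://company-a/rig-output",
                            {"manufacturing", "oil-and-gas", "daily"}))
index.publish(DataSetRecord("company-b", "s3://company-b/sales-2015",
                            {"sales", "oil-and-gas", "quarterly"}))

for hit in index.search("oil-and-gas", "manufacturing"):
    print(hit.owner, hit.uri)
```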

Case Study: Waylay, IoT and Artificial Intelligence [11]

Waylay, an online cloud-native rules engine for any OEM maker, integrator or vendor of smart connected devices, proposes a strong link between IoT and Artificial Intelligence [11].

Waylay builds on a central concept of AI, the rational agent. By definition, an agent is something that perceives its environment through sensors and acts on it via actuators. An example is a robot that uses camera and sensor technology and performs an action, e.g. “move”, depending on its immediate environment (see Figure 8 below).

Extending the role of the agent, a rational agent is one that does the right thing. The right thing might depend on what has happened and what is currently happening in the environment.

Figure 8: Agent and Environment Diagram for AI [11]
Waylay outlines that an agent typically consists of an architecture and logic. The architecture allows it to ingest sensor data, run the logic on the data and act upon the outcome.

Waylay has developed a cloud-based agent architecture that observes the environment via software-defined sensors and acts on its environment through software-defined actuators rather than physical devices. A software-defined sensor can correspond not only to a physical sensor but can also represent social media data, location data, generic API information, and so on.

Figure 9: Waylay Cloud Platform and Environment Design [11]
For the logic, Waylay has chosen graph modeling technology, namely Bayesian networks, as the core logical component. Graph modeling is a powerful technology that provides flexibility to match the environmental conditions observed in IoT. Waylay exposes the complete agent as a Representational State Transfer (REST) service, which means the agent, sensors and actuators can be controlled from the outside, and the intelligent agent can be integrated as part of a bigger solution.
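To illustrate the shape of such an agent – and this is emphatically not Waylay's API or its Bayesian-network logic, just a generic sketch of the sensor-logic-actuator pattern with software-defined components – consider the following:

```python
# Generic rational-agent sketch: perceive via software-defined sensors,
# run the logic, act via software-defined actuators. All names are illustrative.
import random

def temperature_sensor():
    """Software-defined sensor: could wrap a physical probe or a web API."""
    return {"temperature": random.uniform(15.0, 35.0)}

def notify_actuator(message):
    """Software-defined actuator: here it just prints; it could call any API."""
    print("ACTUATE:", message)

class RationalAgent:
    def __init__(self, sensors, actuators, rule):
        self.sensors, self.actuators, self.rule = sensors, actuators, rule

    def step(self):
        observation = {}
        for sensor in self.sensors:
            observation.update(sensor())       # perceive the environment
        decision = self.rule(observation)      # run the logic
        if decision:
            for actuator in self.actuators:
                actuator(decision)             # act on the environment

agent = RationalAgent(
    sensors=[temperature_sensor],
    actuators=[notify_actuator],
    rule=lambda obs: (f"cool the room ({obs['temperature']:.1f} C)"
                      if obs["temperature"] > 28 else None),
)
for _ in range(5):
    agent.step()
```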

In summary, Waylay has developed a real-time decision making service for IoT applications. It is based on powerful artificial intelligence technology and its API-driven architecture makes it compatible with modern SaaS development practices.

End of Case Study 

Reference:

[11] Waylay: Case study – AI and IoT. http://www.waylay.io/when-iot-meets-artificial-intelligence/