Releasing Software Developer Superpowers

This article is aimed at anyone looking to gain an edge in building or advancing a software development team in the digital age. The concepts can be applied outside of software development at some level. Open to discussion – views are my own.

UX is not just for Customers

User experience is an ever-growing component of product development, with user-centric design paradigms ensuring that personalisation and consumer/market fit are achieved. From a development team's view, applying the same user experience concepts to how the team works can drive operational efficiency and accelerate product development. For example, what is the experience like for each of the developer personas on your team? How do their days translate into user stories? Could interviewing the development community lead to better features for your development culture?

Build Products not Technology

Super important. With developers, there is sometimes an over-emphasis on building features, often for the features' own sake. Keeping the lens on the value or "job to be done" for the customer at all times ensures you are building what your customer truly needs. To do this, select and track a set of metrics that measure value for that product, and keep your product development tightly coupled to your customer experience development.

Leverage PaaS to deliver SaaS

This sounds catchy, but it's becoming the norm. Five years ago, it took a developer a week of development time to do what can now be done in Amazon Web Services or Azure in minutes. This has led to a paradigm shift, where you begin to look at the various platforms and tools available to enable developers to deliver great products to customers. Of course, there will always be custom-built applications, but you can help your developers by getting them the right toolkit. There is no point reinventing the wheel when off-the-shelf open source components are sitting there, right? Products like Docker and Spring and practices like DevOps are bringing huge value to organisations, enabling the delivery of software and microservices at enhanced speed. That said, the balance between buying off-the-shelf and building custom is a careful decision at both product and strategic levels.
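As a rough illustration of the "minutes, not weeks" point, here is a minimal sketch (assuming the boto3 AWS SDK is installed and credentials are configured; the AMI ID, instance type and key pair name are placeholders, not real values) that provisions a virtual server programmatically:

```python
# A minimal sketch: provisioning a server on AWS in a few lines.
# Assumes boto3 is installed and credentials are configured; the AMI ID
# and key name below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder AMI
    InstanceType="t2.micro",
    KeyName="my-key-pair",       # placeholder key pair
    MinCount=1,
    MaxCount=1,
)

print("Launched instance:", response["Instances"][0]["InstanceId"])
```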

"The role of a developer is evolving into one like a top chef, where all the ingredients and tools are available; it's just about getting the recipe right to deliver beautiful products to your customer."

Create Lean Ninjas!


Evolving the cultural mindset of developers and the organisation toward agile development is super important. A critical mass of development resources, plus defined agile processes for delivering business success, can reshape your organisation into one where value is created rapidly. However, it's important to perform ethnographic studies of the organisation to assess its culture. This can help decide which agile frameworks and practices (kanban, scrum, XP, etc.) will work best to evolve the development life cycle.

Implement the 10% rule

This could be slightly controversial, and it can be hard to do. Developers should aim to spend 10% of their time looking at the new: new technologies, development practices, company direction, conferences, training. Otherwise you will end up with a siloed, mis-skilled pool of superheroes with their powers bottled.

However, with lean ninjas and effective company-wide agile processes, resources and time can be closely aligned to specific projects, avoiding injected randomness in the development lifecycle. Developers need time to immerse and focus. If you can't give them that, or you continuously distract them with mistimed requests, they will leave. If you can enable them, the 10% is achievable.

Risk Awareness


We are seeing an evolution in threats to enterprises all over the world, and in a software-driven and software-defined world, having developers practise security-inherent design before products hit the market can help protect companies. Moons ago, everything sat on-premises. The demands of consumers mean a myriad of cloud-deployed services are adding to a complex global technology footprint. If developers know the risk landscape of where they deploy, they can act accordingly. Naturally, lining them up with business leaders on compliance and security can also help on the educational pathway.

Business and Technology Convergence

We are beginning to see not only an evolution in development practices but also a new type of convergence (brought about by lean, agile and other methods) where business roles and technology roles are merging. We are seeing business analysts and UX people positioned directly in development teams to represent the customer and change the mindset. We are seeing technology roles positioned directly in business services teams like HR and finance. This is impacting culture, whereby the savviness in both directions needs to be embraced and developed.


Growth Mindset

Mindset has come up a lot in this article. That's because it is hugely important: the right culture and mindset can make all the difference to team success. As Carol Dweck explains in her book "Mindset", mindsets can be broadly categorised into two types: growth and fixed. This applies in all walks of life, but for team building it can be critical.

In a fixed mindset students believe their basic abilities, their intelligence, their talents, are just fixed traits. They have a certain amount and that’s that, and then their goal becomes to look smart all the time and never look dumb. In a growth mindset students understand that their talents and abilities can be developed through effort, good teaching and persistence. They don’t necessarily think everyone’s the same or anyone can be Einstein, but they believe everyone can get smarter if they work at it.

Creating a team where being on a growth curve is expected and failures are seen as learning can really enable a brilliant culture. As Michelangelo said, "I am still learning." This matters especially as we evolve towards six generations of developers: how do we ensure we are creating and mentoring the next set of leaders, from interns through to experienced people?

Check out a TED talk from Carol here – link.

And most importantly … HAVE FUN!

Augmented and Virtual Reality: Now more about improving User Experience

"We cannot eat popcorn wearing a virtual reality (VR) headset" – Zaid Mahomedy, ImmersiveAuthority.com

In 1995, the cringe-worthy Johnny Mnemonic was released, in which the title character used a VR headset and gesture-monitoring gloves to control the "future internet". Even though the movie is over 20 years old, it is only in the past few years that we have seen commercially ready Virtual Reality (VR) and Augmented Reality (AR) technologies hit the market.

If you watch this clip, you will hopefully notice two things. The first is that the technology is clunky. The second is that the predicted user experience (UX) is rich (for the decade of the movie's production): information is available at speed, the gloves are accurate, and the usability is seamless. When he moves his hands, the VR3 responds instantaneously. It assists him at every turn. Yet twenty years later, we have not quite reached this. Why? Because the focus needs to shift from technology to other aspects to enable this industry to flourish.

1: Technology Moves Aside for User Experience

A large amount of technology companies' effort in this space over the past two years has been focused on determining whether they can squeeze enough compute power onto a pair of glasses. Other questions to be answered were whether the battery would last a decent amount of time and whether the heat emissions would be low enough not to inconvenience the user. Whilst there are still optimizations to be performed, the core of the technology has at least been proven, along with some clever innovations around leveraging smartphones to save on hardware investment.

In the coming years we will see more of these companies focusing on the user experience we have with these technologies: ensuring the interfaces, gesture recognition and motion recognition are close to perfect is high on companies' to-do lists. The hardware road-map will ensure the devices are lighter, more robust and, frankly, sexier to wear. Before we discuss the other aspects of how improved UX will be the focus of the coming years, that is not to say that technology won't help here. For example, the evolution of flexible compute paradigms, specifically in the nanotechnology area, will assist in building compute into glasses rather than adding it retrospectively.

2: Difference in Psychologies

Apart from the technology of VR and AR being quite different under the hood, the psychology of how they are used differs too. With AR, we are injecting a digital layer between us and the physical world. With VR, we are immersing ourselves in a digital world. These are very different experiences, and user experience design must have this difference at its core. We must ensure that the layer we design for AR takes characteristics from both our physical environment and our own personas. With VR, the emphasis is much more on ensuring the person feels comfortable and safe in that world.

3: Interfaces to VR/AR UX

The UX design of AR and VR technologies and applications will require careful management of multiple input styles. Using wearables, voice recognition, AR and AI, we will start to see seamless blending and integration in how technology interacts with us across the senses. Touch devices are still in use, but they will move aside as voice recognition, movement tracking and even brain waves come to control these smart devices. The communication will be much faster and more intimate, and will force designers to completely rethink how we interact with these devices.

4: The Role of AI in UX

The UX of these devices will also require more human-like interactions, to build trust between the devices and the users in an organic manner. We are seeing this with voice control technologies like Siri and Google Home, but today they merely understand our voice and reply with a set of canned responses. Soon they will learn to evolve their speech.

Artificial intelligence will take hold of the user experience, analysing the reaction to different experiences and then making changes in real time based on those assessments. UX will become a much more intuitive and personalized experience in the coming years.

5: Convergence of VR and AR Standards

Already we are seeing a myriad of startups evolving in the space, some focusing on content development in software, some on the hardware itself, and some brave enough to offer both. We also have larger companies creating divisions to provide offerings in this space. Choice is great, but when it becomes painful trying on your fourteenth pair of glasses at your average conference, it is not. Observing how companies are beginning to partner up to offer solutions (a trend extremely common in the IoT industry), we see a small step towards some form of standardization. Excessive choice can be bad from a UX perspective, as such fragmentation in initial design makes it harder for app designers to get the experience right on the hardware.

6: Realistic Market Sensing

At some point, we have to get away from the "toy" feel of these devices, put on for ten minutes in an airport or at an event to get a wow out of them. Whilst the applications in the gaming industry are there to be seen, companies are beginning to focus on where else the market will be. Certain devices have flopped in the past two years, and you would wonder why, with such strong brands behind them. The first reason was awful UX. The second was that the market just was not ready, with a distinct lack of content to make the devices in any way useful. Just because a few of these devices fail doesn't mean the movement stops. (The info-graphic below is sourced from washington.edu.)

Consumer and industrial applications have very different requirements from a market perspective: content choice and look-and-feel are very important for consumer markets, while system performance and governance sit higher in industrial use cases. With the costs of adding these technologies to industrial environments under the microscope, companies must focus strongly on measuring and building return on investment (ROI) models.

7: Protecting the User and the Experience

With these technologies predicted to get even closer to us than headsets (smart contact lenses, for example – link here), it is quite important that UX designers intrinsically build comfort and safety into any application. Too many times we have seen people fall over something whilst wearing a headset (more so with VR technologies). And that is just the physical safety. As the threshold between physical and augmented worlds gets closer and closer (mixed reality), we want to avoid a scenario of interface overkill.

Whilst the past few years may suggest that these technologies are fads, the reality is far from it. They will become part of our social fabric as a new form of mobile technology. The user experience with these technologies will be the critical enabler of their success and adoption rate.

Designing for AR and VR requires a better understanding of a user's needs in the context of use. It is about building connections between the physical and digital worlds, requiring an interdisciplinary effort across service design, interaction design and industrial design.

Machine Learning 1.0 over Coffee

This article is aimed at anyone (technical or non-technical) who wants to understand the steps in machine learning at a high level. Readable in five minutes over coffee. I think.

What is Machine Learning?

Today we live in a world of seemingly innumerable connected devices, in both personal and commercial environments. The currency associated with these devices is data, which whizzes around in near real time and is stored locally and in cloud environments. The types of data vary greatly, with text, audio, video and numerical data just a sample of the data modalities generated.

As this data is a currency, there is value associated with it, but how do we extract that value? A high-growth area called data science is used to extract value and insight from this data. It has numerous ingredients in the recipe, with data mining, data optimization, statistics and machine learning key to generating any successful flavor. And like any good recipe, you need a good chef. These chefs, in data terms, are called data scientists, who use a wide variety of tools to glean insight from the data and deliver impact for your business. The data-sets themselves can be either univariate (a single variable or feature) or multivariate (multiple variables or features). A person's age would be an example of a univariate data-set, whereas a multivariate one would expand the person's feature set to include age, weight and waist size, for example.
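As a minimal sketch (hypothetical values, using the pandas library), the difference between a univariate and a multivariate data-set looks like this:

```python
# A minimal sketch of univariate vs multivariate data-sets,
# using hypothetical values and the pandas library.
import pandas as pd

# Univariate: a single feature (age) per person.
ages = pd.Series([34, 41, 29, 55], name="age")

# Multivariate: several features per person.
people = pd.DataFrame({
    "age":       [34, 41, 29, 55],
    "weight_kg": [78, 92, 64, 81],
    "waist_cm":  [86, 102, 74, 90],
})

print(ages.describe())    # summary statistics for the single feature
print(people.describe())  # summary statistics across all features
```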

Why do it?

Machine learning (ML) is born out of the perspective that instead of telling computers how to perform every task, perhaps we can teach them to learn for themselves. Examples range from predicting the sale price of your house based on a set of features (square footage, number of bedrooms, area), to determining whether an image is of a dog rather than a cat, to classifying the sentiment of a set of restaurant reviews as positive or negative. There are a host of applications across many industries, some of which are shown below (source: Forbes).
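To make the house-price example concrete, here is a minimal sketch (hypothetical numbers, assuming the scikit-learn library) that learns a price from square footage and bedroom count instead of being told an explicit pricing rule:

```python
# A minimal sketch of "learning" rather than hand-coding a rule:
# predicting a house price from features, with hypothetical data
# and the scikit-learn library.
from sklearn.linear_model import LinearRegression

# Features: [square feet, number of bedrooms]
X = [[1000, 2], [1500, 3], [2000, 3], [2500, 4]]
y = [150_000, 210_000, 260_000, 320_000]  # sale prices (hypothetical)

model = LinearRegression().fit(X, y)
print(model.predict([[1800, 3]]))  # estimated price for an unseen house
```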

Before any magic comes out of the algorithms, perhaps the most important step in any machine learning problem is the upfront data transformation and mining, towards optimization. Optimization is required because most of the algorithms that "learn" are sensitive to what they receive as input, which can greatly impact the accuracy of the model you build. It also ensures you have a thorough understanding of your data-set and the challenge you are trying to solve. Some of the data transformation and mining techniques include record linkage, feature derivation, outlier detection, missing-value management and vector representation. All of this is sometimes called "exploratory data analysis".
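As a rough sketch of a few of those steps (hypothetical data, using pandas), missing-value management, outlier detection and feature derivation might look like this:

```python
# A rough sketch of three exploratory-data-analysis steps on
# hypothetical data: missing values, outliers, feature derivation.
import pandas as pd

df = pd.DataFrame({
    "age":       [34, 41, None, 55, 29],
    "weight_kg": [78, 92, 64, 300, 70],   # 300 looks suspicious
    "height_m":  [1.80, 1.75, 1.62, 1.70, 1.68],
})

# Missing-value management: fill the gap with the column median.
df["age"] = df["age"].fillna(df["age"].median())

# Outlier detection: flag values outside 1.5 * IQR of the quartiles.
q1, q3 = df["weight_kg"].quantile([0.25, 0.75])
iqr = q3 - q1
df["weight_outlier"] = (df["weight_kg"] < q1 - 1.5 * iqr) | (df["weight_kg"] > q3 + 1.5 * iqr)

# Feature derivation: compute BMI from existing columns.
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2

print(df)
```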

Techniques once Optimized

Once the data is presented in the right manner, there are a number of machine learning techniques one can apply. They are broken into supervised and unsupervised techniques: supervised learning takes a labelled data-set to train your model on, while in unsupervised learning no labels are provided. Unsupervised techniques include learning vector quantization and clustering. Supervised techniques include nearest neighbors and decision trees. Another family is reinforcement learning, where the algorithm allows software agents and/or machines to automatically determine the ideal behavior within a specific context, so as to maximize performance.
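A minimal sketch (tiny hypothetical data, using scikit-learn) contrasting a supervised decision tree with unsupervised clustering:

```python
# A minimal sketch contrasting supervised and unsupervised learning,
# with tiny hypothetical data and the scikit-learn library.
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X = [[1.0, 1.1], [1.2, 0.9], [8.0, 8.2], [7.9, 8.1]]

# Supervised: labels are provided for training.
y = ["cat", "cat", "dog", "dog"]
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[7.5, 8.0]]))      # -> ['dog']

# Unsupervised: no labels, the algorithm finds structure itself.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                     # cluster assignment per sample
```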

Verifying your model is also an important step, and we often use confusion matrices to do that. This involves building a table of four results: true positives, true negatives, false positives and false negatives. A set of test data is applied to the classifier and the results are analysed to assess performance. Sometimes the performance of a single model is still questionable. When this happens, machine learning has an answer in the form of ensemble methods, where you build a series of models and form your final prediction from them. Examples include bagging and boosting on the training data: bagging trains models on multiple resampled sets of the training data, while boosting builds a series of models that each focus on correcting the errors of the previous ones.
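A minimal sketch (hypothetical labels, using scikit-learn) of a confusion matrix and a bagging ensemble:

```python
# A minimal sketch of model verification and ensembling,
# with hypothetical data and the scikit-learn library.
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Confusion matrix: compare true test labels with model predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(confusion_matrix(y_true, y_pred))
# rows = actual, columns = predicted:
# [[TN FP]
#  [FN TP]]

# Bagging: many trees trained on resampled data, predictions combined.
X = [[0, 0], [1, 1], [0, 1], [1, 0], [2, 2], [3, 3]]
y = [0, 1, 0, 1, 1, 1]
ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10).fit(X, y)
print(ensemble.predict([[2, 1]]))
```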

There are complementary techniques used in any successful machine learning project, including data management and visualization, and languages such as Python and Java have a variety of libraries that can be used for your projects.

Going further

A step further from machine learning takes you into a complementary area called artificial intelligence (AI), which leans more on methods such as neural networks and natural language processing that look to mimic the operation of the human brain. This shows how human-centric design in technology is evolving, and how much excitement there is about how humans and technology will work together in the future. It can be said this excitement is born from the realisation that, as we evolve our understanding of what it means to be human, that understanding outweighs anything technology alone can deliver. People have always been at the core of innovation, and this has continually improved our lives.

Man vs Machine: Why the competition?

With the continued evolution of fields such as data science and the Internet of Things, there is a mix of excitement and fear amongst the populace: excitement for what they can do for our lives and businesses, but fear of what they will mean for humanity.

The fears are normally sourced from the media or from childhood movies where rather large robots take over planet Earth. We hear quotes like "Will the robots take our jobs?" and "There will be nothing left for us to do with the evolution of the computer."

In reality, a synergy between humans and technology can lead to better all-round solutions than either in isolation. This is something rarely considered in current engineering circles. With so much technology on offer, why would we need us stupid humans?

A brief story to set the tone

As far back as 2007, I was fielding questions like this as part of the day job. Increased automation in manufacturing is a natural breeding ground for questions of this nature. An example was a computer vision application I built for a Masters dissertation whilst working for Alps Electric (one of the coolest companies in the world). It inspected the graphics on buttons for correctness, both in finish and in symbol. Naturally, this was a task historically done by humans. We used classification techniques on the images, and the receiver operating characteristic curves (ROC curves) showed that the classifier was right 93% of the time, which was a pretty good first-pass result. Please note that this was a time when data science was just called "doing your job".

We wanted to achieve 100%, so in order to improve the algorithm, we decided to use the main source of intelligence in the room: the humans. By presenting the failures to the operator on the production line, asking a simple binary yes/no question ("Is this a genuine failure?"), and saving their response alongside the original image, we were able to get the classifier close to 99% accuracy.

This proved something I had always felt to be the case: humans and machines can work in tandem rather than in competition. With the rapid advancement of technology, and the obsession with using it to optimize our lives, I pose a question: have we forgotten how the two can complement each other? If a data science or machine learning application reaches an accuracy of, say, 70%, we usually try to squeeze extra accuracy out of it by "fine tuning" the algorithm. Perhaps we could instead present the results to a human for final classification?
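A simplified sketch of that human-in-the-loop pattern (the classifier and data structures here are hypothetical stand-ins, not the original Alps Electric system) might look like this:

```python
# A simplified sketch of the human-in-the-loop pattern described above.
# The classifier and images are hypothetical stand-ins, not the
# original production-line system.

def classify(image):
    """Hypothetical classifier stand-in: returns (verdict, confidence)."""
    return "fail", 0.60            # placeholder result for illustration

def ask_operator(image):
    """Show the image to the line operator: is this a genuine failure?"""
    answer = input("Is this a genuine failure? [y/n] ")
    return answer.strip().lower() == "y"

labelled_feedback = []             # (image, operator_label) pairs for retraining

def inspect(image):
    verdict, confidence = classify(image)
    if verdict == "fail":
        # Route predicted failures to the human for the final decision,
        # and keep the answer so the model can be retrained later.
        genuine = ask_operator(image)
        labelled_feedback.append((image, genuine))
        return "fail" if genuine else "pass"
    return verdict
```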

Bring it all together

Last April, I tried to draw out how I saw data science, IoT and intelligence (both computing and human) fitting together, which is shown below. It is an evolutionary map of sorts: we have always had the verticals and data modalities (data types), and we began the data journey by building simple data processing/mining applications (either manually or using algorithms). Lots of the current challenges in data science can be solved at this tier. However, we are seeing an increasing requirement to bring in machine learning applications to solve more advanced challenges. This is a natural evolution towards artificial intelligence, or deep learning. Looking down the map as a whole, this is where the top class really comes to the fore.

Humans are, by our very nature, truly intelligent, and that intelligence evolves as we do, but we are NOT very good at mass processing. Computers, on the other hand, are not so intelligent to begin with, but are incredibly good at mass processing. A natural hybrid would combine true intelligence with mass processing, and that should be the aim for modern artificial intelligence companies and enthusiasts.


Now, I am not saying that all applications in IoT and data science can be solved like this. Of course there will be exceptions. But there are some real, tangible benefits to this approach. Consider the area of street crime. Imagine every camera in a city feeding video into a central location, with a human asked to monitor it. In reality this is already happening individually per building, park and mall, where security guards monitor areas in real time. With advancements in video analytics, it was feared that technology would replace humans, but that is not the case. What has happened is that, as more devices and cameras hit our streets, it becomes impossible to monitor everything. By using advanced video analytics and machine learning to flag anomalies to security staff, they can monitor a far bigger space.
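As an illustrative sketch (simple frame differencing with OpenCV, a placeholder video source and an arbitrary threshold, not a production video-analytics system), flagging frames for human review could look like this:

```python
# An illustrative sketch of flagging "interesting" video frames for a
# human reviewer via simple frame differencing, using OpenCV.
# The video path and threshold are hypothetical placeholders.
import cv2

cap = cv2.VideoCapture("street_camera.mp4")   # placeholder video source
ok, previous = cap.read()
previous = cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY)

frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # How much has the scene changed since the last frame?
    diff = cv2.absdiff(gray, previous)
    motion = (diff > 25).mean()      # fraction of pixels that changed

    if motion > 0.05:                # arbitrary "worth a look" threshold
        print(f"Frame {frame_index}: flagging for human review")

    previous = gray
    frame_index += 1

cap.release()
```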

Thankfully, one of the high-growth areas in technology is Human Machine Interfaces (HMIs), and there are some really good examples of how humans and computers can work together. Daqri's smart helmet is one such product, the world's first wearable HMI. Their mandate is to use technology to improve and optimize how we work, by integrating compute, sensors and computer vision technology into a well-designed helmet. Work, in the future.


As we enter the next phase of digital transformation, ask yourself: how can humans improve and complement the work of technology in your application?

Closing off Web Summit 2015 – Day 2/3

And so it ends. The Web Summit on Irish shores finished on Thursday (for now), and I must admit there was an atmosphere of "what if" and of sombreness. But we cannot allow this to affect our perspective on the impact this conference has had on Ireland's tech landscape. Paddy Cosgrave has built a conference that began with 400 attendees and now draws 35,000. Let's put that into context with regard to people's perception of Ireland as a tech hub. With over 100 countries represented, and technology making the world's tech landscape feel quite small, the voices of the 35,000 will translate into millions. And I am certain the conversation will be about Web Summit, the friendliness of the services and the vibrant Night Summit, and not the number and cost of hotels, the government or the traffic.

Wednesday was a great day, and one of the best I have had at a Web Summit event. It started on the Machine Stage, where a panel including Nell Watson from Singularity University spoke on how machines and humans will coexist and complement each other in our smart future. I liked how Nell spoke about the seamless integration of machines, and how the governance of same will be a key piece of the puzzle.

Nell Watson from Singularity speaks on Machine Stage
Next up on the Machine Stage was another panel, which included Dr. Joe Salvo from GE and Dr. Said Tabet from EMC. The panel was expertly hosted by Ed Walsh, director of technology vision for EMC. Whilst interviewing the guests, Ed brought out not only the technology vision required for IoT, but also the collaboration that can be enabled by consortiums like the IIC, which Dr. Salvo and Dr. Tabet have been so instrumental in building.

Ed Walsh hosting a panel session on the industrial internet
As already mentioned, the proliferation of virtual reality was evident, and I got a demo of Amazon's Audible technology. It was quite neat!


Friday was a more relaxed day, with numbers down a little, but this allowed for a different kind of networking experience. It was a day to chat with as many startups as possible and to catch some great talks. One that stood out was on Centre Stage, where a panel (including Christine Herron from Intel Capital, Albert Wenger from Union Square Ventures and Mood Rowghani from KPCB) was hosted by Charlie Wells of the Wall Street Journal. The topic discussed: tomorrow's tech landscape. Growth, or just a bubble?

 

Panel Discussion on Future of Tech
Well, it looks like what Nell mentioned above is already happening, judging by who I bumped into!

And so, we are off to Lisbon. Whilst I believe there will be challenges there too, it is Cosgrave's personality that will shine through. An excellent data science company, CI Labs, was spawned out of the Web Summit, and whilst there is a data science feel to much of the Web Summit, it is the personality of Cosgrave and his team that still makes this event stand above many.

Numenta and MemComputing: Perfect AI Synergy


Let's look at two forces of attraction in the technology space, specifically around creating truly artificially intelligent systems by utilizing advances in both software and hardware technologies.

For years, even decades, we have chased it. AI has been at the top of any list of research interests, and while there have been advances, the persistent challenge has been that while hardware electronics advanced in the 70s and 80s, software design lagged behind. Then software advanced incredibly over the past decade. So now, in July 2015, we reach a key point of intersection between two "brain-based technologies", which could be built together in a way that may lead to "true AI".

At no other point in history have we had both hardware and software technologies that can “learn” like we can, whose design is based on how our mind functions.

Numenta

First, let's look at Numenta. Apart from having the pleasure of reading Jeff Hawkins' excellent book "On Intelligence", I have started looking at the open source AI algorithms they provide (GitHub here). In a journey that started nine years ago, when Jeff Hawkins and Donna Dubinsky founded Numenta, the plan was to create software modeled on the way the human brain processes information. Whilst it has been a long journey, the California-based startup has made accelerated progress lately.


Hawkins, the creator of the original Palm Pilot, is the brain expert and co-author of the 2004 book "On Intelligence." Dubinsky and Hawkins met during their time building Handspring, and they pulled together again in 2005 with researcher Dileep George to start Numenta. The company is dedicated to reproducing the processing power of the human brain, and it shipped its first product, Grok, earlier this year to detect odd patterns in information technology systems. Those anomalies may signal a problem with a computer server, and detecting problems early could save time, money or both (think power efficiency in servers).

You might think, hmm, that's nothing spectacular for a first application of algorithms based on the mind, but it is what we actually started doing as Neanderthals: pattern recognition. First it was objects, then patterns of events, and so on. Numenta is built on Hawkins' theory of Hierarchical Temporal Memory (HTM), which describes how the brain has layers of memory that store data in time sequences, and which explains why we easily remember the words and music of a song. (Try this in your head: try starting a song in the middle, or the alphabet. It takes a second longer to start.) HTM became the formulation for Numenta's code base, called the Cortical Learning Algorithm (CLA), which in turn forms the basis of applications such as Grok.

Still with me? Great. So that is the software, designed and built on the layers of the cortex of our brains. Now let's look at the hardware side.

 

Memcomputing

After reading this article in Scientific American recently, at the same time as reading Hawkins' book, I really began to see how these two technologies could meet somewhere: silicon up, algorithms down.


A new computer prototype called a “memcomputer” works by mimicking the human brain, and could one day perform notoriously complex tasks like breaking codes, scientists say. These new, brain-inspired computing devices also could help neuroscientists better understand the workings of the human brain, researchers say.

In a conventional microchip, the processor, which executes computations, and the memory, which stores data, are separate entities, so data must constantly be shuttled between them. This constant transfer of data between the processor and the memory consumes energy and time, limiting the performance of standard computers.

In contrast, Massimiliano Di Ventra, a theoretical physicist at the University of California, San Diego, and his colleagues are building "memcomputers", made up of "memprocessors" that can both store and process data. This setup mimics the neurons that make up the human brain, with each neuron serving as both processor and memory.

I won't go into the specifics of how the building blocks are designed, but they are based on three basic components of electronics: capacitors, resistors and inductors, or, more aptly, memcapacitors, memristors and meminductors. The paper describing this is here.

Di Ventra and his associates have built a prototype from standard microelectronics. The scientists investigated a class of problems known as NP-complete. With this type of problem, a person may be able to quickly confirm whether any given solution works, but cannot quickly find the best solution. One example of such a conundrum is the "traveling salesman problem," in which someone is given a list of cities and asked to find the shortest route that starts from one city, visits every other city exactly once and returns to the starting city. Finding the best solution with a conventional computer is essentially a brute-force exercise.
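To illustrate why brute force blows up, here is a minimal sketch (hypothetical distances) that enumerates every tour for a handful of cities; with n cities there are (n-1)! tours to check:

```python
# A minimal sketch of brute-forcing the traveling salesman problem:
# enumerate every possible tour and keep the shortest. The distances
# below are hypothetical.
from itertools import permutations

# Hypothetical symmetric distance table between four cities.
dist = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def d(a, b):
    return dist.get((a, b)) or dist[(b, a)]

cities = ["A", "B", "C", "D"]
start = cities[0]

best_tour, best_length = None, float("inf")
for perm in permutations(cities[1:]):          # fix the start city
    tour = (start,) + perm + (start,)
    length = sum(d(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
    if length < best_length:
        best_tour, best_length = tour, length

print(best_tour, best_length)
```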

The memprocessors in a memcomputer can work together to find every possible solution to such problems. "If we work with this paradigm shift in computation, those problems that are notoriously difficult to solve with current computers can be solved more efficiently with memcomputers," Di Ventra said. In addition, memcomputers could tackle problems that scientists are exploring with quantum computers, such as code breaking.

Imagine running software that is designed based on our minds, on hardware that is designed on our minds. Yikes!

In a future blog, I will discuss what this means in the context of the internet of things.


 

 

Nell, Google and a Half Pipe! EnterConf Belfast – Day 2

Quote of the day: "Counterfeiting is an insidious problem in life sciences, our network tenant cloud can help stop it" – Shabbir Dahod, TraceLink, Inc.

As EnterConf entered its second day, I continually saw the benefit of having more detailed discussions with people in the enterprise sector. Even during the night events (the speaker dinner in the Harbour Commissioners' Office, a great venue, followed by a few sociables in the Dirty Onion bar), I kept monitoring the dynamics taking place. The networking normally began with two people, but the circles grew and joined to form what I like to call "RoundStandUps". These were normally not short conversations, and collaboration was inherent in the voices and chatter. There was also a deep and satisfying undertone: an energy to keep "building great" in Ireland.

Check out the Half Pipe! Hope it's at Web Summit! 🙂

Half Pipe at EnterConf

Kicking us off on Centre Stage was none other than the inspirational futurist Nell Watson from Singularity University, who is also the CEO of Poikos, the smartphone 3D body measurement company. She talked about virtual employees, and how we will replicate the human mind through AI within 20 years (and run businesses through AI). I liked how Nell bridged the machine and human interdependencies. It was an insightful talk, and having spent the past year looking at machine intelligence (from both hardware and software implementation perspectives), I am seeing more and more futurists thinking like this.

Nell Watson, CEO of Poikos on Centre Stage

A few talks focused on our evolving workplace. David Hale from Gigwalk spoke on the Insight Stage on "Deploying Technology to Power Mobile Field Teams and Maximise Work Efficiency". David covered how mobile tools for consumer brands and retailers are being used to more effectively manage field teams, gather in-store data and direct resources to improve retail execution ROI. He also spoke about how our employees are changing, and how companies have to empower the "millennial employee", whose requirements include flexibility and a social, online mindset.

David Hale, from Gigwalk on the Insight Stage

Shabbir Dahod of TraceLink, Inc. spoke on the Centre Stage on "Delivering the Internet of Things (IoT) to the Enterprise", and I found it one of the highlight talks of the summit. Shabbir spoke about how TraceLink runs the world's largest track-and-trace network for connecting the life sciences supply chain and eliminating counterfeit drugs from the global marketplace, using their Life Sciences Cloud, configured in a network tenant architecture.

Shabbir Dahod – TraceLink, Inc

Thomas Davies, Head of Enterprise for Google, drew a huge level of engagement from the crowd with his talk on the next stage of collaboration. Thomas traced the evolution of how we collaborate: since the early 1980s the structures have been quite rigid, and until a few years ago they had not changed much. But now customer and employee expectations have changed. They are fast, 24/7, global and personalised. He discussed how employees and organisations are more efficient when they collaborate. "We shape our tools, and then our tools shape us" – Marshall McLuhan.

Thomas Davies (Google) in exuberant form on Centre Stage

One last talk I'll cover addressed a topic that sits somewhat under the covers of enterprise IT, and I am glad Engin Akyol of Distil Networks spoke on "Dark Cloud: Cloud Providers as a Platform for Bot Attacks". Engin first spoke about good bots, which do serve a purpose for major cloud providers. But the talk focused on bad bots, which slow down application performance and skew analytics. As cloud platforms continue to scale, it becomes ever easier to set up bot networks that can pilfer content from websites or launch other malicious attacks.

Engin Akyol of Distil Networks

So, I'll sign off from EnterConf 2015 and look on to Web Summit in November, with many events, collaborations and new experiences in between. As a two-day conference, perhaps I made fewer contacts than I expected to, but the ones I did make are more meaningful, and EnterConf gives its attendees an environment to do that. I also sat in on round-tables on big data and security, which gave yet another dynamic. It really is a conference experience I will be returning to. Special mention to all the organisers, the volunteers and the inspiring venue. Goodbye Belfast, hello Dublin!

Oh, I almost forgot, I really hope Krem Coffee are at Web Summit, awesome coffee!

Pre Cloud Security Considerations in IoT

Introduction

Over the past decade, hybrid cloud adoption has steadily increased, with closed networks becoming less often the option of choice. But this comes at a cost to security and trust. As we become more dependent on intelligent devices in our lives, how do we ensure the data within this web is not compromised by external threats that could endanger our personal safety?

As the adoption of IoT increases, so does the risk of hackers getting at our personal information. As Alan Webber points out on the RSA blog [6], there are three key risk areas, or bubbles, that companies need to be aware of.

1: Fully enabled Linux/Windows OS systems: This area concerns devices that are not part of normal IT infrastructure but still run full operating systems, such as Linux or Windows. As everyone knows, these operating systems had vulnerabilities long before IoT, and when such devices are deployed in the "free world", they are not as visible to IT admins.

2: Building Management Systems (BMS): This pertains to infrastructure systems that assist in the management of buildings, such as fire detection, suppression, physical security systems and more. These are not usually classified as under threat, yet shutting down a fire-escape alarm system could lead to a break-in scenario.

3: Industry Specific Devices: This area covers devices that assist a particular industry, such as manufacturing, navigation, or supply chain management systems. For example, in the case of a supply chain management system, route and departure times for shipments can be intercepted, which could lead to a shipment being intercepted and rerouted to another geographical location.

So how do we guard against these types of risk, and make both the devices themselves and the web of connected devices less dumb? Security must be looked at holistically to begin with: end-to-end security systems should be employed to ensure system-level safety, alongside work on device-level embedded control software to ensure data integrity from edge to cloud.

Data routing must also be taken seriously from a security standpoint. For example, smart meters generally do not push their data to a gateway continuously; they send it to a data collection hub, which then sends it in a single bulk packet to the gateway. Whilst the gateway might have an acceptable security policy, what about the data collection hub? This raises a major challenge: how does one micro-manage all the various security systems one's data might migrate across?

Security Design Considerations

Early-stage IoT devices unfortunately allowed for the potential loss of physical security in their design, so it is necessary for security officers to be aware of the focus and location of their security provisioning.

Applying security design on the devices themselves is not the most utilized method (similar to on-device storage), as the cost and capacity constraints of these devices work against it. On the device, the aim would be to ensure consistency of communication and message integrity. Usually, one deploys the more complex security design upfront within the web services that sit in front of and interact with the devices. It is predicted that as the devices themselves evolve, and nanotechnology becomes more and more of an enabler in the space, security design will move closer to the devices, before eventually becoming embedded.
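As an illustrative sketch of the message-integrity idea (a shared key and Python's standard hmac module; key management and transport security are deliberately out of scope here), a device could sign each reading and the cloud service could verify it:

```python
# An illustrative sketch of message integrity from edge to cloud using an
# HMAC signature. The shared key and payload are hypothetical; real
# deployments also need key management and transport security.
import hmac
import hashlib
import json

SHARED_KEY = b"device-123-secret"   # placeholder pre-shared key

def sign_reading(reading: dict) -> dict:
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "hmac": tag}

def verify_reading(message: dict) -> bool:
    expected = hmac.new(SHARED_KEY, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when checking the tag
    return hmac.compare_digest(expected, message["hmac"])

msg = sign_reading({"meter_id": "m-42", "kwh": 3.7})
print(verify_reading(msg))            # True
msg["payload"] = msg["payload"].replace("3.7", "0.1")
print(verify_reading(msg))            # False: tampering detected
```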

It is proposed that shared cloud-based storage will play a pivotal role in combating the data volume problem, though not without issues. How do we handle identification and authentication? How do we ensure adequate data governance? Partnerships between security officers and cloud providers will be necessary to ensure these questions are answered.

Searching for the holy grail of 100% threat avoidance is impossible, given the number of players in an entire IoT ecosystem. Whilst cloud service providers own their own infrastructure, it is very difficult for them to know that the data they receive has not been compromised. There are ways to reduce this risk, such as using metadata and building "smarts" into the data, based on typical known sets, as it transitions from edge to cloud. A nightclub security guard checking potential clients at the door is a useful analogy: "What's your name?" (what type of data are you), "Where have you been tonight?" (what's your migration path), "How many drinks have you had?" (what transactions happened on your data).
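A toy sketch of that "bouncer" check (the metadata fields and policy values here are hypothetical) might look like this:

```python
# A toy sketch of the "nightclub bouncer" metadata check described above.
# The metadata fields and policy values are hypothetical.
ALLOWED_TYPES = {"telemetry", "status"}
TRUSTED_HOPS = {"device", "collection-hub", "gateway"}

def admit(envelope: dict) -> bool:
    """Decide whether a data packet gets into the 'club' (the cloud)."""
    if envelope.get("data_type") not in ALLOWED_TYPES:                 # what are you?
        return False
    if not set(envelope.get("migration_path", [])) <= TRUSTED_HOPS:    # where have you been?
        return False
    if envelope.get("transform_count", 0) > 3:                         # how many drinks?
        return False
    return True

packet = {
    "data_type": "telemetry",
    "migration_path": ["device", "collection-hub", "gateway"],
    "transform_count": 1,
}
print(admit(packet))   # True
```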

IoT Security and Chip Design

One area that could bring about increased data privacy is greater use of "Trusted Execution Environments", or TEEs: a secure area within the device's main processor. This ensures that independent processing of critical data can occur within the silicon itself. It enables trusted applications to run that enforce confidentiality and integrity, and protects against unauthorized cloning or object impersonation by remove-and-replace. As a real-world example, a homeowner tampering with their smart meter to reduce their energy bill is one scenario that TEEs would prevent.

If cloud services companies can somehow increase their influence on IoT device design (beyond the popularity of TEEs in cellular applications), then utilizing technology such as this will mean less risk once the data reaches the cloud. Collaboration between all parties should be increased to ensure best practice can be established across the entire IoT landscape.

Figure 1. Generalized framework for a secure SoC [7]
References:

[6] RSA blog: 3 key risk areas for the Internet of Things – https://blogs.rsa.com/3-key-risk-areas-internet-things/

[7] EDN: Using virtualization to implement a scalable trusted execution environment in secure SoCs – http://www.edn.com/design/systems-design/4402964/2/Using-virtualization-to-implement-a-scalable-trusted-execution-environment-in-secure-SoCs

IoT meets Data Intelligence: Instant Chemistry

Even in an ideal world of perfect network topology, a web of sensors, a security profile, a suitable data center design and plenty of applications for processing and analysis, one thing is constant across all of these: the data itself. Data science is well talked about, and careers have been built on the concept. It is normally aimed at the low-hanging fruit of a data set, the things that are easily measured. Science will take you so far, but it is data intelligence that shows the true value, with the capability to predict the impact of actions, track it over time, and build modelling engines to solve future problems.

Even the data set is different for data intelligence as opposed to data science, which relies on lots and lots of data (Facebook working out the effectiveness of its changes and features, for example). It can be more complex, smaller even, perhaps contained within a single process or building. Imagine a hospital's machines streaming live data to an analytics engine, with historical models compared against the live data to gauge risk to patients. It can have real, tangible benefits for quality of life. Commonly called "operational intelligence", the idea is to apply real-time analytics to live data with very low latency. It is all about creating a complete picture: historical data and models working with live data to provide a solution that can potentially transform all kinds of industries.

At the core of any system of this kind is decision making, and again one must strive to make it as intelligent as possible. There are two types of decision making: static and dynamic. With the assistance of mathematical models and algorithms, it becomes possible to analyse any IoT data set for the further implications of alternative actions, and as a result one would expect the efficiency of decision making to increase.

At the IoT device level, there is scope to apply such a solution. Given the limited storage capacity on the devices themselves, one option is a form of rolling deterministic algorithm that analyses a window of sensor readings and decides whether or not to send a particular measurement on to the intelligent gateway or cloud service.
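A minimal sketch of such a rolling filter (the window size and threshold are hypothetical choices) could be:

```python
# A minimal sketch of an on-device rolling filter: keep a small window of
# recent readings and only forward a measurement that deviates enough from
# the rolling mean. Window size and threshold are hypothetical choices.
from collections import deque

WINDOW = deque(maxlen=3)        # tiny window for illustration (limited storage)
THRESHOLD = 2.5                 # "interesting" deviation, in the same units

def should_send(reading: float) -> bool:
    if len(WINDOW) < WINDOW.maxlen:
        WINDOW.append(reading)
        return True             # not enough history yet: forward everything
    mean = sum(WINDOW) / len(WINDOW)
    WINDOW.append(reading)
    return abs(reading - mean) > THRESHOLD

for value in [20.1, 20.3, 20.2, 27.9, 20.4]:
    if should_send(value):
        print("send to gateway:", value)
```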

Another proposed on-device implementation might use a deviation-from-normal model such as the Mahalanobis-Taguchi System (MTS), an information-pattern technique that has been used in diagnostic applications to help make quantitative decisions by constructing a multivariate measurement scale using data-analytic methods. In the MTS approach, the Mahalanobis distance (MD, a multivariate measure) is used to measure the degree of abnormality of patterns, and principles of Taguchi methods are used to evaluate the accuracy of predictions based on the constructed scale. The advantage of MD is that it considers correlations between the variables, which are essential in pattern analysis. Given that it can be used on a relatively small data set (the greater the number of historical samples, the better the model to compare against), it could be utilized in the hospital diagnosis example: perhaps the clinician needs a quick on-device indication of how close a patient's measurements are to a sample set of recent hospital measurements.
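A small sketch of the Mahalanobis-distance part (hypothetical reference measurements, using numpy and scipy; the Taguchi optimization step of MTS is omitted) is shown below:

```python
# A small sketch of scoring a new measurement against a reference set with
# the Mahalanobis distance. The measurements are hypothetical, and the
# Taguchi optimization part of MTS is omitted for brevity.
import numpy as np
from scipy.spatial.distance import mahalanobis

# Reference set: [heart rate, systolic blood pressure] for "normal" patients.
reference = np.array([
    [72, 118], [75, 122], [68, 115], [80, 125], [70, 120], [77, 119],
])

mean = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

new_patient = np.array([110, 160])   # hypothetical new measurement
score = mahalanobis(new_patient, mean, cov_inv)
print(f"Degree of abnormality: {score:.2f}")   # larger = further from normal
```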

Taking this one stage further, if we expanded this to multiple hospitals, could we start to think about creating linked data sets that are pooled together to extract intelligence? What if a storm is coming? Will it affect my town or my house? Imagine sensors on each house tracking the storm in real time, predicting its trajectory and tracking direction changes, with the service then communicating directly with the homeowners in its path.

With the premise of open source software in mind, consider now the concept of open data sets, linked or not. Imagine I were the CEO of a major oil and gas company, eager to learn from other companies in my sector and, in reverse, to allow them to learn from us through data sets. Tagging data by type (financial, statistical, online statistical, manufacturing, sales, for example) allows a metadata search engine to be created, which can then be used to gain industry-wide insight at the click of a mouse. The tagging is critical, as the data is then not simply a format but descriptive as well.

Case Study: Waylay, IoT and Artificial Intelligence [11]

Waylay, an online cloud-native rules engine for any OEM, integrator or vendor of smart connected devices, proposes a strong link [11] between IoT and artificial intelligence.

Waylay proposes a central concept from AI: the rational agent. By definition, an agent is something that perceives its environment through sensors and acts on it via actuators. An example is a robot that uses camera and sensor technology and performs an action, e.g. "move", depending on its immediate environment (see Figure 8 below).

To extend the role of an agent, a rational agent then does the right thing. The right thing might depend on what has happened and what is currently happening in the environment.

Figure 8: Agent and Environment Diagram for AI [11]
Waylay outlines that an agent typically consists of an architecture and logic. The architecture allows it to ingest sensor data, run the logic on that data and act upon the outcome.
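As a generic illustration of that sense-think-act loop (this is not Waylay's actual API; the sensor, rule and actuator below are hypothetical stand-ins), an agent skeleton might look like this:

```python
# A generic illustration of an agent's sense -> logic -> act loop.
# This is not Waylay's API; the sensor, rule and actuator below are
# hypothetical stand-ins for software-defined equivalents.
import time

def read_sensor() -> float:
    """Software-defined sensor: could wrap a physical device or an API."""
    return 31.5   # placeholder temperature reading

def decide(temperature: float) -> str:
    """The agent's logic: map observations to an action."""
    return "cool" if temperature > 30.0 else "idle"

def actuate(action: str) -> None:
    """Software-defined actuator: could call a device, an API, or send an alert."""
    print("actuator:", action)

def run_agent(cycles: int = 3, period_s: float = 1.0) -> None:
    for _ in range(cycles):
        observation = read_sensor()    # perceive the environment
        action = decide(observation)   # run the logic
        actuate(action)                # act on the environment
        time.sleep(period_s)

run_agent()
```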

Waylay has developed a cloud-based agent architecture that observes the environment via software-defined sensors and acts on its environment through software-defined actuators rather than physical devices. A software-defined sensor can correspond not only to a physical sensor but can also represent social media data, location data, generic API information, and so on.

Figure 9: Waylay Cloud Platform and Environment Design [11]
For the logic, Waylay has chosen graph modeling technology, namely Bayesian networks, as the core logical component. Graph modeling is a powerful technology that provides flexibility to match the environmental conditions observed in IoT. Waylay exposes the complete agent as a Representational State Transfer (REST) service, which means the agent, sensors and actuators can be controlled from the outside, and the intelligent agent can be integrated as part of a bigger solution.

In summary, Waylay has developed a real-time decision making service for IoT applications. It is based on powerful artificial intelligence technology and its API-driven architecture makes it compatible with modern SaaS development practices.

End of Case Study 

Reference:

[11] Waylay: When IoT meets artificial intelligence – http://www.waylay.io/when-iot-meets-artificial-intelligence/