In part two of our series on how technology is creating a smart grid, transformation specialist Steve Miller looks at how we can manage an increasingly complex digital ecosystem.

Part Two: Managing change

In Part One, we looked at how to lay the foundations for a smart grid. We ended on the third stage of that process: continuous improvement. This lines up with where we are in practice: the fairly early stages of an iterative development process.

It is also where the transformation becomes more specifically digital, rather than just broadly ‘technological’. There will of course continue to be technological contributions: new and improved forms of generation, innovative forms of storage, and even new transmission media. But when it comes to making this smart, it is very much a case of digital transformation. And the main reasons for this are data analytics and automation.

Before we look at the greater extent of the changes we can expect, it’s important we first understand what has happened – and will continue happening – to the structure of the grid.

Complex ecosystems

When we started, we had a relatively simple picture. There was baseline supply augmented by variable capacity to meet changes in demand. Over time we have added ancillary services, introduced greater redundancy, and put in place layers of monitoring infrastructure. We’ve been experimenting with variable inputs and with consumers who are no longer solely reliant on the grid to meet their demand.

At the same time as all this, the edge of the grid will continue to become fuzzier. More granular reporting on demand allows for more precise management of resources. With this, we are able to develop demand profiles down to the level of individual buildings, rooms, and even individuals or devices.

For example, if you know my typical work movements then it’s possible to predict where I’ll be at any given time. From that, device activity will tell you when I’ve got my phone and laptop with me and when they’re being used. Scale that up across all your users and you get an accurate – if somewhat amorphous – profile of your organisation’s ‘energy behaviour’, almost minute by minute.
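To make that concrete, here is a minimal sketch of how device-level readings might be rolled up into a per-minute demand profile. The devices, wattages and timestamps are hypothetical placeholders, not telemetry from any real system.

```python
# A minimal sketch of building a per-minute demand profile from device-level
# activity records. All device names, wattages and timestamps are hypothetical.
from collections import defaultdict
from datetime import datetime

# Each record: (timestamp, device identifier, watts drawn while active)
readings = [
    (datetime(2024, 5, 1, 9, 0), "laptop-42", 45.0),
    (datetime(2024, 5, 1, 9, 0), "phone-42", 5.0),
    (datetime(2024, 5, 1, 9, 1), "laptop-42", 60.0),
    (datetime(2024, 5, 1, 9, 1), "monitor-7", 30.0),
]

def minute_profile(records):
    """Sum device draw per minute to give a building-level demand curve."""
    profile = defaultdict(float)
    for ts, _device, watts in records:
        profile[ts.replace(second=0, microsecond=0)] += watts
    return dict(sorted(profile.items()))

for minute, watts in minute_profile(readings).items():
    print(f"{minute:%H:%M}  {watts:6.1f} W")
```

The same roll-up logic scales from a handful of devices to a whole estate; only the volume of records and the granularity of the time buckets change.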

With this, there’ll be ever-greater integration between the macro and micro scales. If we can predict to within a small percentage what power will be required on smaller scales, we can work towards the holy grail of absolute efficiency. Like perfection, we won’t ever achieve it, but it’s the goal our decision-making is geared towards.

Managing this rising complexity is possibly the defining challenge for national energy infrastructure over the next decade. Clear reporting and coherent planning will allow us to rationalise where possible. Modular designs and industry standards will help us get the most out of the fractal-like creep of increasingly granular thinking. Adopting a DevOps mindset of iterative design will help deliver change in manageable chunks.

Which is all a gross over-simplification of a colossally nuanced and dynamic process. Which is why we need a whole new paradigm of automation.

Digitised change management

As we get smarter about our energy economy, there is a tendency towards making it more complicated. Nowhere is this truer than when it comes to managing our new wealth of data, in the quest to turn it into valuable information. Because, as should be clear by now, there is going to be a lot of it.

Not only that, but it will demand deeper analysis if we are to retrieve the maximum useful information from it. That means more decisions, each more time-intensive and harder to parse than those we’ve dealt with until now. With information overload already a growing challenge for businesses, we will need ways to overcome it.

Any serious data storage and analytics project now really means one thing: hyperscale cloud services, such as Microsoft’s Azure. This approach allows you not only to keep your data all in one place, but also to hold it in a format that makes it amenable to analysis. These Big Data repositories are sometimes called Data Lakes, but might better be understood as Data Haystacks. What we want to do is find the needles of information they contain.

How we find them is by using advanced analytics and machine learning tools. There are plenty of these built into the same cloud environments your data should, by then, be sitting in. By pointing them at our data, we can identify trends within it. What we’re doing is refining our raw material (data) into a valuable output (information).

And this is where we can get serious about automation. Stage one is using our data and machine learning to build a list of likely scenarios. We then task an artificial intelligence with watching data as it comes in and reacting accordingly. It can manage load-balancing, move power between batteries based on known demand trends… all the kinds of things we do now, but faster and in greater detail.
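As a purely illustrative sketch of that first stage – with invented scenario names, thresholds and actions rather than anything drawn from a real control system – it might look something like this: a small library of likely scenarios (here hard-coded, in practice derived from historical data), and an agent that checks each incoming reading against them and returns the pre-agreed response.

```python
# Hypothetical 'stage one' automation: match live readings against a library
# of known scenarios and dispatch a pre-agreed response. Names, thresholds
# and actions are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    name: str
    matches: Callable[[dict], bool]   # does the live reading fit this pattern?
    respond: Callable[[dict], str]    # pre-agreed action to take

scenarios = [
    Scenario(
        "evening_peak",
        matches=lambda r: r["hour"] in (17, 18, 19) and r["demand_mw"] > 900,
        respond=lambda r: f"discharge battery bank A by {r['demand_mw'] - 900:.0f} MW",
    ),
    Scenario(
        "overnight_surplus",
        matches=lambda r: r["hour"] < 6 and r["demand_mw"] < 400,
        respond=lambda r: "charge battery bank A from baseline surplus",
    ),
]

def react(reading: dict) -> str:
    """Stage one: find the first known scenario that fits and act on it."""
    for s in scenarios:
        if s.matches(reading):
            return f"[{s.name}] {s.respond(reading)}"
    return "no known scenario; flag for an operator"

print(react({"hour": 18, "demand_mw": 960}))
print(react({"hour": 3, "demand_mw": 350}))
```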

Stage two of AI automation would give a greater degree of autonomy to the grid’s self-management systems. They’d be able not only to tell us to scale up generation or alert us to faults, but also to take steps to address these things themselves.
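Continuing the illustrative sketch above, stage two adds a simple policy layer: low-risk events the system is allowed to handle itself, with everything else escalated to operators. Again, the event names and remedial actions here are hypothetical.

```python
# Hypothetical 'stage two' policy: act autonomously on pre-approved,
# low-risk events; escalate anything unfamiliar to human operators.
AUTO_REMEDIATE = {
    "transformer_overtemp": "reroute load to adjacent feeder",
    "battery_low": "schedule charge from forecast surplus",
}

def handle(event: str) -> str:
    if event in AUTO_REMEDIATE:
        return f"acted autonomously: {AUTO_REMEDIATE[event]}"
    return f"escalated to operators: {event}"

print(handle("transformer_overtemp"))
print(handle("substation_fault"))
```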

Finally, there’s a stage three. This is a grid that has nearly complete sensory oversight of itself. It can analyse this data in real-time and run its day-to-day activities autonomously. But it also does some deeper thinking, allowing it to advise us on both tactical and strategic matters. And it would start being able to factor in all that increased granularity from the fuzzy edge of the grid, allowing those who’re smart-grid ready to make very substantial savings.

To be clear, this isn’t a sentient grid. We’ve all seen the films; nobody needs that on their risk register. But it is a realistic long-term goal of automation. It is what a truly ‘smart’ – if not quite ‘intelligent’ – grid looks like.

That’s the perpetual endgame of the smart grid. In the final part of this series, we’ll explore what it will look like to succeed in this new energy ecosystem.