August 10, 2018
The NSF just had a workshop on “Real-time Data Analytics for the Resilient Electric Grid”.
Terry Nielsen presented our ideas on how to reduce reliance on highly accurate network models.
Over the weekend prior to the 2018 IEEE PES General Meeting in Portland, the National Science Foundation (NSF) and Washington State University (WSU) hosted a workshop on “Real-time Data Analytics for the Resilient Electric Grid”. The workshop featured some great discussions and ideas on how data analytics can help address the challenges of grid resiliency. This is closely tied to one of NSF's 10 Big Ideas for the future https://www.nsf.gov/news/special_reports/big_ideas/ called Harnessing the Data Revolution (HDR).
The invited participants included practitioners from across the power system industry, members of the academic community, and data scientists from outside the power systems industry. The workshop was structured so that the first day gave participants an opportunity to present and share their related work in a series of presentations, and the second half-day was devoted to brainstorming breakout sessions on specific subtopics.
From the presentations it became clear that there are some very promising areas where data analytics and the associated research have already yielded results. For instance, the adoption of phasor measurement units across the industry over the last decade has produced a large pool of data that is being leveraged for research and is even making it into vendor solutions used by utilities. Another practical application, presented by two research projects, was the use of data analytics to optimize electric vehicle charging.
The final portion of the workshop consisted of breakout sessions for brainstorming on challenges that data science can address to improve grid resilience. I won't list all of the ideas that came up here; for one thing, I couldn't do justice to many of them.
I firmly believe that data science can reduce the need for power system controls to rely on highly accurate network models. I have spent many years helping utilities build and leverage power system models used in energy management system (EMS), distribution management system (DMS), and outage management system (OMS) implementations. These network models are used by the software algorithms in these systems to optimize the dispatch of generation, minimize losses, and restore power faster, to name just a few applications. The biggest challenge in these implementations is cleaning up the network models to get them to a point where they are accurate enough to produce the desired results.
Often when I hear of a failed or massively over-budget implementation of an EMS, OMS, or DMS, the root cause is the unforeseen cost of getting the network model clean enough to support the algorithms, so that they don't suffer from the age-old problem of garbage in, garbage out.
Some data scientists have suggested to seasoned power system engineers that algorithms and solutions based upon the physics of the system are no longer needed, and that machine learning can replace all of the applications developed over the last century. I do agree that pure machine-learning-based solutions might be a reasonable approach for a subset of the problems in the industry. In fact, there have been some great examples, such as Conservation Voltage Reduction (CVR) projects that have been successfully implemented without any model-based power system algorithms. These make for great business cases because of the low initial cost of deploying the solution.
However, where things get really exciting is the good research actively being pursued to combine the two approaches into a hybrid solution that is better than either alone. The biggest improvement will come if we can find approaches to these optimization problems that use both machine learning and power system models, but don't require years of effort to clean up the model. This work, while promising, faces some big challenges on the way to practical industry solutions: continued funding of the promising research areas; replication of the research so that it is sound and proven in an academic setting; crossing the big chasm of proving it to industry practitioners; and finally getting it to vendors who can develop, productize, sell, and support solutions. These challenges are formidable, and I hope the NSF, the academic community, and most importantly the industry work together to address them.
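To make the hybrid idea concrete, here is a deliberately tiny toy sketch of my own (the function names, numbers, and the least-squares "learner" are all invented for illustration, not anything presented at the workshop): a physics-based estimate from an imperfect network model is paired with a correction fitted from field measurements, so the measurements absorb the model error instead of requiring a years-long model cleanup.

```python
# Hypothetical illustration of a hybrid physics + data-driven estimator.
# The "physics model" is just Ohm's law for a feeder voltage drop, and the
# "machine learning" is a one-parameter least-squares correction.

def physics_voltage_drop(current_a, resistance_ohm):
    """Voltage-drop estimate from the network model (V = I * R)."""
    return current_a * resistance_ohm

def fit_correction(currents, measured_drops, model_resistance_ohm):
    """Fit a scalar correction factor that best maps the model's
    predictions onto field measurements (ordinary least squares).

    This stands in for a learned model: it compensates for an
    inaccurate resistance value in the network model.
    """
    preds = [physics_voltage_drop(i, model_resistance_ohm) for i in currents]
    num = sum(p * m for p, m in zip(preds, measured_drops))
    den = sum(p * p for p in preds)
    return num / den

# The utility's model says the feeder resistance is 0.5 ohm,
# but the true (unknown) value is 0.6 ohm -- a "dirty" model.
model_r, true_r = 0.5, 0.6
currents = [10.0, 20.0, 30.0]
measured = [physics_voltage_drop(i, true_r) for i in currents]

k = fit_correction(currents, measured, model_r)
hybrid = [k * physics_voltage_drop(i, model_r) for i in currents]
# The corrected hybrid predictions now track the measurements
# even though the underlying model parameter is wrong.
```

Real hybrid approaches are of course far richer than a single scale factor, but the pattern is the same: let the physics model carry the structure of the problem while data closes the gap left by an imperfect model.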
Terry Nielsen, GridBright EVP of Utility Solutions