
R&D100 Winners - Software/Services

Tactical Microgrid Standard Open Architecture

When soldiers set up remote bases, they need reliable power grids to run everything from general appliances to sensing, weapons, and communications equipment. Fuel convoys are frequently attacked, taking a high toll in human lives and raising fuel costs to as much as $200 per gallon. And once the fuel and equipment arrive at a remote location, soldiers must set up and operate these power grids with little prior training and under harsh conditions. MIT Lincoln Laboratory’s Tactical Microgrid Standard (TMS) Open Architecture advances significantly beyond previously available microgrid capabilities and helps solve these problems. Through a unique combination of cutting-edge ideas and technologies from academia and industry, TMS makes it possible to connect a family of interoperable microgrid devices, accommodating multiple manufacturers and legacy equipment while improving both acquisition and field operations.

EverBatt: Argonne’s novel closed-loop battery recycling model

A closed-loop battery life-cycle model, EverBatt, developed at Argonne National Laboratory, helps address the growing national need to handle the rising number of spent batteries responsibly and sustainably. The model helps stakeholders better understand the cost and environmental impacts of battery recycling and accelerates the development of a more sustainable battery supply chain, which is critical to America’s energy security. In addition to comparing different recycling scenarios, EverBatt identifies cost and environmental hotspots throughout the life cycle of a battery. It also helps to direct and validate battery recycling research and development efforts and to overcome potential barriers to process commercialization. Though the model currently serves only the battery sector, its framework can, and will, be applied to other products on the market; the methodology is the same for every product.

Offshore Risk Modeling (ORM) Suite

The Offshore Risk Modeling (ORM) Suite, developed by the National Energy Technology Laboratory, is a set of eight innovative, science- and data-driven computational tools and models that span the full offshore system and improve the prediction and evaluation of its behavior. The suite’s tools and models address key knowledge and technology gaps to improve upon conventional oil spill prevention methods. Using petabytes of data, the suite applies novel and efficient methods to cross-examine data across space and time, supporting predictive analyses for safer, more prudent future efforts as well as rapid-response, real-time assessments. Over six years, the ORM Suite has been validated and adopted for use by domestic and international regulators, researchers, academia, and industry. Its tools have been used independently and synergistically to meet different end-user needs. The suite has been utilized to assess geohazards and subsurface resources, support worst-case oil spill planning, predict a spill’s socioeconomic impact, and characterize offshore infrastructure lifespan.

ALFa LDS: Autonomous, low-cost, fast leak-detection system

ALFa LDS, developed at Los Alamos National Laboratory, is an affordable, robust, autonomous system for the detection of natural gas leaks. It operates 24/7/365 to locate natural gas leaks early, enabling fast repair. The detector distinguishes natural gas leaks from biogenic sources of methane. Methane and ethane sensor data and atmospheric wind measurements are fed from two compact instruments into a simulation-trained artificial neural network that is then able to detect, locate, and even quantify a leak. The featherweight platform is small enough to be deployed on a drone but powerful and intelligent enough to minimize fugitive leaks across the entire network of natural gas extraction, production, and consumption. This system outperforms competing leak detection technologies at a fraction of the cost. Elements of this gas leak detection technology have broad applications for other criteria pollutants — enabling portable, autonomous atmospheric surveys for a variety of contaminants.
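The ethane channel is what lets the system separate natural gas leaks from biogenic methane: thermogenic natural gas carries ethane, while methane from microbial sources does not. A minimal Python sketch of that screening idea follows (the thresholds, background value, and function names are hypothetical; the actual system feeds these signals, plus wind data, into a simulation-trained neural network):

```python
# Illustrative sketch, not the ALFa LDS code: classify a methane
# enhancement as likely natural gas vs. biogenic using the ethane signal.
# Thermogenic natural gas contains ethane; biogenic methane does not.

def classify_enhancement(ch4_ppm, c2h6_ppb, ch4_background_ppm=1.9,
                         ethane_ratio_threshold=0.01):
    """Label a methane reading relative to background.

    ch4_ppm / c2h6_ppb are in-plume readings; the ~1% ethane-to-methane
    ratio threshold is a hypothetical tuning parameter.
    """
    enhancement = ch4_ppm - ch4_background_ppm
    if enhancement <= 0:
        return "no enhancement"
    # Convert ethane to ppm and compare with the methane enhancement.
    ratio = (c2h6_ppb / 1000.0) / enhancement
    return "natural gas leak" if ratio >= ethane_ratio_threshold \
        else "biogenic methane"
```

In the deployed system this kind of discrimination is learned rather than hand-thresholded, and wind measurements allow the network to also localize and quantify the leak.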

Open Multidisciplinary Analysis and Optimization (OpenMDAO) framework

OpenMDAO is an open-source framework developed at NASA Glenn Research Center that allows users to solve complex multidisciplinary analysis and design-optimization problems. It can be used to streamline design processes for extremely complex systems, from experimental aircraft to wind turbines to space missions, saving significant cost and time. No other software tool makes the invaluable techniques involving analytic derivatives so broadly available to so many users, both expert and non-expert. Its increased efficiency removes the barriers of time, expense, and resources, helping to democratize design optimization. OpenMDAO has made this computing power available for the first time to dozens of disciplines through a remarkably flexible and versatile framework interface. Together with its unique support for both parallel and serial computing, this flexibility enables a distributed supercomputing capability unmatched by other commercially available MDAO tools.
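The payoff of analytic derivatives is that each optimization step gets an exact gradient from a single evaluation, instead of re-running the model once per design variable for finite differences. A generic, hand-rolled sketch (not OpenMDAO's API) that minimizes a simple paraboloid using its analytic gradient:

```python
# Illustrative sketch of gradient-based design optimization with analytic
# derivatives (generic code, not OpenMDAO). The paraboloid below is of the
# kind used in introductory MDAO examples.

def f(x, y):
    """Objective: a paraboloid with minimum at (20/3, -22/3)."""
    return (x - 3.0) ** 2 + x * y + (y + 4.0) ** 2 - 3.0

def grad_f(x, y):
    # Exact partial derivatives: no extra function evaluations needed,
    # unlike finite differencing, which costs one evaluation per variable.
    return 2.0 * (x - 3.0) + y, x + 2.0 * (y + 4.0)

def minimize(x=0.0, y=0.0, lr=0.1, steps=500):
    """Plain gradient descent on f using the analytic gradient."""
    for _ in range(steps):
        gx, gy = grad_f(x, y)
        x, y = x - lr * gx, y - lr * gy
    return x, y
```

Frameworks like OpenMDAO go much further, assembling analytic derivatives across coupled disciplines automatically, but the cost advantage over finite differencing is the same idea.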

Open-source software for designing CO2 capture, transport, and storage infrastructure

SimCCS2.0 is an open-source software package from Los Alamos National Laboratory that industry, researchers, and government can use to design carbon dioxide (CO2) capture and storage (CCS) infrastructure that optimally links CO2 sources (such as power plants) with CO2 sinks (such as saline aquifers and depleted oil fields) to reduce industry carbon footprints and maximize revenues. Through SimCCS2.0-designed CCS infrastructure, energy and economic security can be maximized through a combination of tax credits for CO2 capture and CO2-enhanced energy production, even as industrial carbon footprints are minimized. SimCCS2.0 is user friendly and easily accessed through GitHub, allowing a wide range of users, stakeholders, and decision makers to understand the potential of CCS. BP, Southern Company, Archer Daniels Midland, Battelle, the Department of Energy, Stanford University, and many others have already learned that SimCCS2.0 could transform how the world addresses the CCS grand challenge.
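At its core, the design problem pairs capture volumes with storage capacity at least cost. SimCCS2.0 solves this as a full network optimization over candidate pipeline routes; the toy sketch below (hypothetical names, capacities, and costs) illustrates only the source-sink matching idea, using a greedy cheapest-first assignment rather than the exact optimization a real tool performs:

```python
# Illustrative sketch, not SimCCS2.0: match CO2 sources to storage sinks
# by cheapest per-tonne cost, respecting capture rates and capacities.

def match_sources_to_sinks(sources, sinks, cost):
    """sources: {name: Mt/yr captured}; sinks: {name: Mt/yr capacity};
    cost: {(source, sink): $/t}. Returns (source, sink, flow, cost) tuples.
    Greedy cheapest-first; a real design tool solves this exactly."""
    remaining_src = dict(sources)
    remaining_sink = dict(sinks)
    plan = []
    # Visit candidate links from cheapest to most expensive.
    for (s, k), c in sorted(cost.items(), key=lambda kv: kv[1]):
        flow = min(remaining_src.get(s, 0), remaining_sink.get(k, 0))
        if flow > 0:
            plan.append((s, k, flow, c))
            remaining_src[s] -= flow
            remaining_sink[k] -= flow
    return plan
```

A greedy pass can miss the global optimum, which is why tools in this space formulate the problem as a mixed-integer program over the whole candidate network at once.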

SAM reactor system analysis code

Researchers at Argonne National Laboratory have developed SAM, a modern system-analysis code for advanced nuclear reactor safety analysis. It provides fast-running, whole-plant transient analysis capability with advances in physical modeling, numerical methods, and software engineering. SAM expands beyond the traditional system analysis code to enable multidimensional flow, containment, and source term analysis, either through reduced-order modeling in SAM or via coupling with other simulation tools. It can model the integrated response of the nuclear fuel, the reactor, the coolant system, the engineered safeguards, the balance of plant, operator actions, and all of the possible interactions among these elements to obtain a best-estimate simulation that includes both validation and uncertainty quantification. SAM provides accurate, efficient, state-of-the-art computation. The U.S. Nuclear Regulatory Commission has selected SAM as its primary system analysis tool for advanced-reactor design-basis accident analysis.

MIRaGE — Multiscale Inverse Rapid Group-Theory for Engineered Metamaterials

Sandia’s MIRaGE is inverse-design software that relates desired properties to the groups of molecular symmetries that possess them. Using those symmetries to predict behavior, a metamaterial can be designed that is guaranteed to exhibit the desired properties. MIRaGE allows the researcher to explore various configurations, simulate the system, and validate its behavior, giving the designer a powerful tool for tuning a design precisely to the requirements without guesswork. MIRaGE retains its speed across a variety of platforms, and it offers support at various levels of design proficiency. Metamaterials designed with MIRaGE will serve in a variety of specialized optics, such as advanced lasers, cloaking materials, and thin, flat lenses. MIRaGE is truly an all-in-one tool for the future of metamaterials research.

ResStock — A 21st-century tool for energy-efficiency modeling with unparalleled granularity

Researchers at the National Renewable Energy Laboratory have developed ResStock to bring energy-efficiency modeling into the 21st century. A new level of detail captures potential upgrades that other approaches overlook while giving stakeholders the information they need to encourage these energy- and money-saving upgrades. This information helps save homeowners money and reduces energy consumption, alleviating strain on the grid. ResStock has identified upgrades worth $49 billion in annual energy bill savings, potentially reducing total U.S. residential energy use by 22%. It is being used to pave the way for major energy initiatives in various cities and states, including the city of Los Angeles. The biggest impact of ResStock may eventually be its ability to reveal which upgrades pair best with variable generation, reducing peak load and enabling a 21st-century electric grid built on renewable energy.

Visibility Estimation through Image Analytics (VEIA)

The Visibility Estimation through Image Analytics (VEIA) technology, from MIT Lincoln Laboratory, provides an inexpensive and robust way to extract meteorological visibility from cameras — transforming weather cameras into weather sensors. With the proliferation of web-based camera imagery for monitoring conditions near airports and other remote locations, there is an opportunity to significantly expand the density of visibility observations, especially in areas where low visibility can have dire consequences. The VEIA algorithm uses the presence and strength of edges in an image to estimate the meteorological visibility within the scene. The algorithm compares the overall edge strength of the current image to that of a clear day to generate an edge strength ratio. The ratio is then converted to visibility in miles using a linear correlation coefficient. For the comparison, VEIA uses a multiple-day composite of clear-day images to ensure that only permanent edges (e.g., the horizon, roadways, and buildings) are considered.
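The edge-strength-ratio step can be sketched in a few lines of Python. This is illustrative only, not the VEIA implementation: the gradient measure is a crude edge proxy, and the clear-day visibility constant and linear mapping are simplified assumptions.

```python
# Illustrative sketch of the VEIA idea: compare current-image edge
# strength with a clear-day composite and map the ratio to visibility.
# Images are plain 2-D lists of grayscale values.

def edge_strength(img):
    """Sum of absolute horizontal and vertical gradients (edge proxy)."""
    total = 0
    for r in range(len(img)):
        for c in range(len(img[0])):
            if c + 1 < len(img[0]):
                total += abs(img[r][c + 1] - img[r][c])
            if r + 1 < len(img):
                total += abs(img[r + 1][c] - img[r][c])
    return total

def estimate_visibility(current, clear_composite, clear_visibility_mi=10.0):
    """Edge-strength ratio scaled by a linear coefficient, in miles."""
    ratio = edge_strength(current) / edge_strength(clear_composite)
    return min(ratio, 1.0) * clear_visibility_mi
```

Using a multi-day clear composite in the denominator is what makes the ratio robust: transient edges (vehicles, shadows) average out, leaving only the permanent scene structure.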

Neurodata Without Borders: Neurophysiology

Neuroscientists and scientific software engineers at Lawrence Berkeley National Laboratory collaborated on NWB:N to fill a critical gap in the neuroscience research community by providing not only a data standard but a rich software ecosystem surrounding the standard. NWB:N enables the re-use of data, facilitates collaboration across laboratories, and makes high-performance computing easily accessible to neuroscientific research. Unlike other data formats, NWB:N is open source, free to use, supports the full scope of neurophysiology experiments, and is optimized for storing and analyzing the increasingly large datasets being generated in the field today.

Commercial Building Energy Saver (CBES)

Small and medium-sized U.S. commercial buildings consume 47% of the building sector’s primary energy. However, retrofitting these buildings poses a significant challenge for owners, who usually lack the resources of large organizations and low-cost access to tools that can identify cost-effective, energy-efficient retrofits. The Commercial Building Energy Saver (CBES), developed by Lawrence Berkeley National Laboratory, bridges this gap by providing an easy-to-use, powerful retrofit analysis tool tailored specifically for this subsector. CBES enables users to benchmark their building’s energy performance and to identify and evaluate retrofit measures in terms of energy and cost savings, carbon emission reduction, indoor environmental quality (IEQ), and investment payback, considering incentives and rebates. CBES serves a broad audience with various backgrounds and levels of data availability.

Traffic Aware Strategic Aircrew Requests (TASAR) traffic-aware planner

Traffic Aware Strategic Aircrew Requests (TASAR), from NASA Langley Research Center, features a cockpit automation system that monitors for potential flight trajectory improvements and displays them to the pilot. These wind-optimized flight trajectory changes are precleared of potential conflicts with other known airplane traffic, weather hazards, and airspace restrictions. The objective of TASAR is to improve the process by which pilots request flight path and altitude modifications due to changing flight conditions. Changes may be made to reduce flight time, increase fuel efficiency, or improve some other flight attribute desired by the operator. Currently, pilots make such requests to air traffic control (ATC) with limited awareness of what is happening around them. Consequently, some of these requests are denied, resulting in no flight improvement and an unnecessary workload increase for both pilots and controllers.

Digital Twin Solutions for Smart Farming

DTSSF (Digital Twin Solutions for Smart Farming), an innovative artificial-intelligence and human-intelligence digital twin service platform, aims to preserve experienced farming knowledge for digital workforce transformation and to increase overall production performance in the agriculture and aquaculture industries. DTSSF helps farmers extract implicit knowledge from data and provides more precise and comprehensive insights into the production process and asset status, leading to better decision making and operation strategies. With the Institute for Information Industry’s (III’s) DTSSF, small to medium-sized farming companies and farmers can receive intelligent, adaptive, and dynamic facility management and farming decision-making suggestions via analyses, modeling, and learning.

Artificial Diversity and Defense Security

The Sandia-developed Artificial Diversity and Defense Security (ADDSec) technology automatically detects and responds to threats within critical infrastructure environments in real time. With ADDSec, industrial control system (ICS) environments gain increased resiliency by automatically detecting and responding to threats in the first phase of an attack. The ADDSec solution can be retrofitted into existing networks using freely available commercial or open-source software. Compatibility with legacy and modern hardware switches is especially important for ICS environments, which typically keep existing hardware in place for decades at a time to reduce costs and maintain high levels of availability. The ADDSec technology is scalable and has been tested and evaluated with more than 300 end devices.

Severe Contingency Solver: Electric Power Transmission Analysis

Severe Contingency Solver for Electric Power Transmission (SCS-EPT) systems, developed at Los Alamos National Laboratory, is the only open-source software that reliably computes a solution for severely damaged power networks. By supporting multiple platforms and performing complex calculations, SCS-EPT removes the need for human intervention when analyzing damaged power networks — a game changer in this field. The software is license-free and cross-platform, which are necessary features for enabling powerful cluster computing. SCS-EPT guarantees a solution for power networks with hundreds to thousands of damaged components, which current commercial tools cannot offer. Without the need for human intervention, this customizable software quickly and reliably computes the impacts of component damage in complex extreme event analysis workflows.

Accelerating resilience and I/O for supercomputing applications

Lawrence Livermore and Argonne National Laboratories collaborated on the Scalable Checkpoint/Restart Framework 2.0 (SCR), which enables high-performance computing (HPC) simulations to take advantage of hierarchical storage systems without complex code modifications. SCR utilizes fast storage tiers on HPC supercomputers to quickly cache application and resilience data so that applications can perform I/O operations orders of magnitude more quickly than with traditional methods and produce their results in less time. SCR offers several features that support I/O and resilience for HPC applications and put it far above existing competitive products, including requiring few code modifications, providing full-featured support for checkpoint/restart, and managing general application data in addition to checkpoint data.
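The caching idea can be illustrated with a small Python sketch. This is not the SCR API; the directory layout, retention count, and flush interval are invented for illustration. Checkpoints land in a fast local tier, the fast tier keeps only the newest few, and an occasional copy is flushed to slower durable storage:

```python
# Illustrative sketch of multilevel checkpointing in the spirit of SCR
# (not SCR's actual interface).
import os
import shutil

def write_checkpoint(step, data, cache_dir, shared_dir,
                     keep_in_cache=2, flush_every=5):
    os.makedirs(cache_dir, exist_ok=True)
    os.makedirs(shared_dir, exist_ok=True)
    path = os.path.join(cache_dir, f"ckpt_{step}.dat")
    with open(path, "w") as f:       # fast-tier write on the hot path
        f.write(data)
    if step % flush_every == 0:      # occasional copy to durable storage
        shutil.copy(path, os.path.join(shared_dir, f"ckpt_{step}.dat"))
    # Evict older checkpoints from the limited fast tier.
    ckpts = sorted(os.listdir(cache_dir),
                   key=lambda n: int(n.split("_")[1].split(".")[0]))
    for old in ckpts[:-keep_in_cache]:
        os.remove(os.path.join(cache_dir, old))
    return path
```

The key property is that the expensive write to shared storage happens only occasionally and off the critical path, while most checkpoints live briefly in the fast tier and are recycled.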

CHIRP: Cloud Hypervisor Forensics and Incident Response Platform

A secure cloud presence demands the ability to confirm unauthorized access, gauge the nature of the attack and its goals, gather and preserve evidence toward eventual prosecution, and monitor the location for any further intrusion. CHIRP, developed at Sandia National Laboratories, gives cyber defenders these abilities. CHIRP uses custom Virtual Machine Introspection (VMI) via a cloud hypervisor to provide digital-forensic capabilities — a first of its kind. Forensic artifacts are extracted from VMs in real time without detection, allowing the defender to stealthily eavesdrop on and capture adversary intentions, actions, and tools. CHIRP works with a diverse set of hypervisors and operating systems and can be deployed quickly in on- and off-premises clouds with the click of a mouse.

Package manager for HPC Systems

Spack, from researchers at Lawrence Livermore National Laboratory, automates the process of building and installing scientific software on laptops, high-performance clusters, and supercomputers. Modern scientific software combines libraries written in many programming languages and is deployed on diverse computing architectures. To achieve the best performance in these environments, developers build software directly from source code. Spack automates the build workflow without sacrificing software performance or flexibility. It reduces deployment time for large software stacks from weeks to hours, and it enables end users and developers to install software without the aid of specialized staff.

Consequence-driven Cyber-informed Engineering

Consequence-driven Cyber-informed Engineering (CCE), from Idaho National Laboratory, offers one of the best pathways to sustaining operations of the nation’s critical infrastructure systems. These systems control and deliver essential services such as electricity, clean water, and medical care, so their disruption or failure can be catastrophic. By re-engineering key processes while armed with an understanding of attackers’ tactics and options, critical infrastructure stakeholders can use CCE tools to take the highest-consequence targets off the table. Proven through pilot tests, CCE achieved its goal of identifying processes and functions that must not fail, then selectively reducing or eliminating the digital pathways attackers could use to reach and affect them. The methodology outlines the steps organizations must take to transform their understanding of security risks to their most important processes, so critical infrastructure owners and operators can evaluate and secure valuable equipment and safeguard critical processes.

MLSTONES: Bio-inspired malicious software detection and analysis

MLSTONES, created by computer scientists at Pacific Northwest National Laboratory, is a one-of-a-kind malware detection tool that protects national infrastructure and defends against large-scale attacks with the potential to debilitate a nation by doing something that other tools do not: identifying malware that has never been seen before. Available detection products typically require prior knowledge of the malware at hand and often fail to recognize malware that has been modified to bypass detection tools. By dissecting code to a granular level, MLSTONES can see through obfuscation techniques and identify similarities in the basic building blocks of malware. Not only does this approach make it difficult for attackers to evolve malware in a way that evades detection, it also gives cyber analysts a head start in understanding the intent of the malware.

Rapid Convective Growth Detector

Rapid Convective Growth Detector (RCGD) technology, from MIT Lincoln Laboratory and the Federal Aviation Administration, enables national-scale detection of hazardous storm growth at a rate 10 times faster than comparable weather systems. The system uses tilt-by-tilt radar processing, storm tracking and motion compensation, time alignment, trending analysis, and mosaicking to generate specific hazard avoidance regions updated every 30 seconds. Data from RCGD is generated at a pace sufficient to support short-term tactical warnings to air traffic controllers and flight crews, enabling them to avoid rapidly growing storms that may not yet be visible to weather radars onboard aircraft. The technology has been validated through a series of test exercises using extensive actual and synthetically generated weather data and will be included as part of the FAA’s NextGen Weather System, which will be operational in 2021.
