INNER SANCTUM VECTOR N360™ | QUANTUM COMPUTERS


INNER SANCTUM N360 ©™ VECTOR SPECIAL EDITION

QUANTUM COMPUTERS IN DEFENSE
THE CONCEPT OF IN-MEMORY COMPUTING

Joseph Reddix, President and CEO, The Reddix Group (TRG)
Dr. Hans C. Mumm, QSA
Dr. Merrick S. Watchorn, Huntington Ingalls Industries (HHI)

There is a national security risk in attempting to secure quantum computers using the current solid-state memory approach; we must begin to explore a new approach using in-line memory. This approach allows quantum computing to be introduced into the national security arena with no cybersecurity risks, no supply chain issues, an increase in performance, and an increase in natural language abilities.

Joseph Reddix, CEO; Dr. Hans C. Mumm, QSA; Dr. Merrick S. Watchorn, DMIST, QIS, QSA, Huntington Ingalls Industries (HHI)

Since 2013, the United States and its Allies have endured a constant, sustained effort to erode national security, resiliency, and confidence and to undermine infrastructure through the digital warfare strategies espoused by their adversaries. This continuous strain has incurred strategic financial, technical, and workforce debt not seen before in the cyber domain. Cloud computing made distributed computing economically affordable for commerce, and its success enabled the first integration of quantum and cloud. When cyber adversaries have access to the power of quantum computing, our modern cryptographic systems based on public keys will not stand up to the test (NIST, 2021). The White House led an effort to establish the National Quantum Initiative (NQI) Act, which became Public Law 115-368 in December 2018, to accelerate American leadership in quantum information science and technology (NTSP, 2021). One area of exploration in quantum is light and solid-state memory allocation and its ability to influence:

1) Microwave storage,

2) Light learning microwave conversion,

3) Orbital angular momentum,

4) Gradient Echo Memory,

5) Electromagnetically induced transparency, and

6) In-Memory Computing (IMC)

IMC will provide an order-of-magnitude improvement in memory efficiency and enable the Multi-Level Security (MLS) principles associated with Zero Trust Architectures (ZTA), as mandated by current federal regulations and standards.

What is IMC?

IMC is the integration of In-Line Memory and quantum computing architectures, and it forms the heart of our proposed concept and implementation strategies. Combining classical and quantum in-memory management software with classical and quantum computing hardware produces more than 40X overall performance. This same IMC structure can be applied to High-Performance Computing (HPC) hardware.


In addition, Quantum Information Systems (QIS) are poised to fundamentally change the way our national security institutions conduct business.

Data protection, risk modeling, portfolio management, robotic process automation, digital labor, natural language processing, machine learning, auditing in the cloud, blockchain, quantum computing, and deep learning may look very different in a post-quantum world. Since 2013, the United States Government has issued numerous guidelines, policies, and Executive Orders (EO) to begin building the post-quantum-resistant environment required for resiliency: EO 13636 (2013), Improving Critical Infrastructure Cybersecurity; EO 13800 (2017), Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure; and EO 14028 (2021), Improving the Nation's Cybersecurity. These Executive Orders provide a roadmap for investment strategies for the national research organizations tasked with implementation and execution.

Proposed Concepts

Advancements in IMC technology enhance the ability to provide 40X the performance achievable on soon-to-be-deployed Exascale computing platforms.

These computational capabilities of current processors and accelerators can effectively meet NNSA/ASC mission drivers. IMC provides memory technology improvements (e.g., density, capacity, bandwidth, access speed, access granularity, and power).

This includes module- or package-level improvements and/or integration, as well as advanced cybersecurity architectures. IMC enhances near-memory data marshaling and/or compute capabilities, emphasizing benefits across multiple NNSA application domains.


IMC stores a quantum state for later retrieval. These states hold useful computational information in the form of qubits. Unlike the classical memory of everyday computers, states stored in quantum memory can be in a quantum superposition, giving quantum algorithms much more practical flexibility than classical information storage.
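The superposition property is what a quantum memory must preserve. The minimal sketch below (illustrative only, not the authors' implementation) represents a single stored qubit as a state vector and scores an ideal store-and-retrieve cycle with the standard fidelity overlap.

    import numpy as np

    # Computational basis states |0> and |1>.
    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])

    # Equal superposition (|0> + |1>) / sqrt(2): the kind of state a classical
    # memory cannot hold but a quantum memory must preserve.
    psi_in = (ket0 + ket1) / np.sqrt(2)

    # Hypothetical retrieved state; an ideal memory returns the input unchanged.
    psi_out = psi_in

    # Retrieval fidelity |<psi_out|psi_in>|^2: 1.0 means perfect storage and readout.
    fidelity = abs(np.vdot(psi_out, psi_in)) ** 2
    print(f"retrieval fidelity: {fidelity:.2f}")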

IMC is essential for developing many devices in quantum information processing, including a synchronization tool that can match the various processes in a quantum computer, a quantum gate that maintains the identity of any state, and a mechanism for converting predetermined photons into on-demand photons. IMC can be used in many areas, such as quantum computing and quantum communication. Continuous research and experiments have enabled quantum memory to realize the storage of qubits.

IMC is an essential component of quantum information processing applications such as quantum networks, quantum repeaters, linear optical quantum computation, and long-distance quantum communication.


From a cybersecurity perspective, the magic of qubits is that if a hacker tries to observe them in transit, their fragile quantum states shatter. This means hackers can't tamper with network data without leaving a trace. Now, many companies are taking advantage of this feature to create networks that transmit highly sensitive data. In theory, these networks are secure.

This efficiency, close to 70% as discussed below, represents the highest reported to date for the storage and readout of optical qubits in any physical platform and is more than double the previously reported values. It also outperforms the critical 50% threshold required to beat the no-cloning limit without postselection.

Quantum light-matter interfaces are at the heart of photonic quantum technologies. IMC for photons, where non-classical states of photons are mapped onto stationary matter states and preserved for subsequent retrieval, is a technical realization enabled by exquisite control over interactions between light and matter. The ability of IMC to synchronize probabilistic events makes it a key component in quantum repeaters and quantum computation based on linear optics.

IMC has demonstrated a highly efficient memory for optical qubits by successfully operating a large, elongated atomic ensemble in a dual-rail configuration. This combination enables the reversible mapping of arbitrary polarization states with fidelities well above the classical benchmark and an overall storage and retrieval efficiency close to 70%.
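As a quick illustration of how those figures relate (a sketch with hypothetical write and read efficiencies, not measured data), the overall efficiency is the product of the storage and readout steps, and it is that product that must clear the 50% no-cloning threshold:

    storage_efficiency = 0.83   # hypothetical write (storage) efficiency
    readout_efficiency = 0.84   # hypothetical read (retrieval) efficiency

    overall = storage_efficiency * readout_efficiency
    print(f"overall storage-and-retrieval efficiency: {overall:.0%}")  # ~70%
    print(f"beats the 50% no-cloning limit: {overall > 0.50}")         # True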

Besides the network architecture scalability and potential loss-tolerant schemes, the achieved efficiency opens the way to tests of advanced quantum networking tasks where the storage node efficiency plays a critical role, such as in certification protocols or unforgeable quantum money. Moreover, the designed platform is directly compatible with recent works based on spatially structured photons and multiple degrees of freedom storage; it can now yield very efficient realizations to boost high-capacity network channels.

Key Areas to Consider: National Security Vulnerabilities Exposed


Technology flourishes as free markets expand. Quantum computers will be the next productivity accelerator expanding the market's breadth and depth. As a result, policy issues become critical as the productivity of the information revolution stabilizes to a more constant growth rate.

The threat of unchecked technology and the ability to weaponize quantum computing continue to evolve.

Quantum solid-state memory advancements have offered more sophisticated capabilities with cost-effective designs, resulting in a lower entry barrier for consumers, businesses, enemy states, and terrorist organizations. As a result, quantum computers' reduced barrier to entry is now a national security risk.

Clear national policies, laws, and governance are required, as the danger and the need for a unified global response are becoming undeniable.

Currently, there is not a single sensor type, defense posture, or reliable countermeasure in place to stop any of these evolving threats from quantum computers.

The US does not have the policies, governance, or identification capabilities needed to offer reliable defenses against quantum computing; however, in-memory quantum computing can offer the ability to track, audit, and continuously monitor quantum systems.

Technology Areas of Interest

IMC is a promising approach for reducing the energy cost of data movement between memory and processor when running data-intensive deep learning applications on computing systems. Together with Binary Neural Networks (BNN), IMC provides a viable solution for running deep neural networks on edge devices with stringent memory and energy constraints.
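To make the BNN idea concrete, here is a minimal sketch (an illustration under assumed layer sizes, not the authors' design) of a binarized fully connected layer: weights and activations are constrained to +1/-1, so the multiply-accumulate collapses to sign logic that maps naturally onto in-memory compute arrays.

    import numpy as np

    rng = np.random.default_rng(0)

    def binarize(x):
        # Sign binarization: map real values to {-1.0, +1.0}.
        return np.where(x >= 0, 1.0, -1.0)

    # Hypothetical layer: 8 inputs, 4 outputs.
    real_weights = rng.standard_normal((4, 8))
    inputs = rng.standard_normal(8)

    w_bin = binarize(real_weights)
    x_bin = binarize(inputs)

    # Binary matrix-vector product: every term is +1 or -1, which hardware can
    # realize as XNOR plus popcount instead of full-precision multiplies.
    pre_activation = w_bin @ x_bin
    outputs = binarize(pre_activation)
    print(outputs)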

With this approach, the processing can be offloaded to a more tactical edge solution which has the potential to provide better memory load balancing.


Clearly, Edge Computing is a way of life.

The recent success of deep learning in various domain-specific tasks involving object recognition, classification, and decision making is primarily driven by the architectural innovations of neural networks commonly known as Deep Convolutional Neural Networks (DCNNs). However, the high performance of DCNNs comes with substantial memory requirements for storing the network parameters used in computation, which constrains the deployment of such networks on edge devices for mobile applications. Moreover, when implemented in a conventional von Neumann hardware architecture, data-intensive deep learning applications require frequent memory access and data movement between memory and computation units, which can outweigh the energy cost of computation and add significant energy overhead.
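A back-of-the-envelope sketch (with assumed, hypothetical layer sizes rather than any specific network) shows why parameter storage alone strains edge devices: even a modest convolutional stack plus one fully connected layer lands in the tens of megabytes at 32-bit precision, well beyond typical on-chip SRAM.

    # Hypothetical layer shapes: (out_channels, in_channels, kernel_h, kernel_w).
    conv_layers = [
        (64, 3, 3, 3),
        (128, 64, 3, 3),
        (256, 128, 3, 3),
    ]
    fc_params = 256 * 7 * 7 * 1000  # hypothetical final fully connected layer

    params = sum(o * i * kh * kw for o, i, kh, kw in conv_layers) + fc_params
    bytes_fp32 = params * 4  # 4 bytes per 32-bit parameter

    print(f"parameters: {params:,}")
    print(f"fp32 parameter storage: {bytes_fp32 / 1e6:.1f} MB")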


Furthermore:

• As data continues to permeate organizations at all levels at an ever-increasing pace, organizations are challenged to handle the volume, velocity, and integrity of data as leadership strives to derive value from the data and drive business impact in real time.

• Generating data intelligence requires the analysis of vast quantities of diverse data, either structured or unstructured and generated by humans or by machines, to uncover patterns and pursue breakthrough ideas.

• Artificial Intelligence (AI) is the latest stage of analytics, with AI training and inferencing becoming an integrated aspect of organizations' capabilities.

Unlocking the intelligence from data in real-time requires a modern application and data management environment. The IT infrastructure that hosts these applications and data management platforms serves as a critical foundation layer. The move to a real-time enterprise includes:

• Converging business-centric transaction processing and data-centric analytics systems to increase the quality and timeliness of insight (i.e., systems of record, engagement, and insight)


• Deploying In-Line Memory databases for low-latency response times as part of the application environment (see the sketch after this list)

• Infusing data analytics platforms with high-performance technologies to optimize application performance for large data sets

• Using a highly available and secure conduit for data movement between the various application tiers

• Implementing an appropriate data persistence tier that can support the storing, securing, and fast access of rapidly changing data sets

• Preparing for increasingly AI-infused applications and the related data movement and data processing requirements, for example, by introducing hardware acceleration
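As a small illustration of the In-Line Memory database bullet above, the sketch below uses Python's built-in sqlite3 module with a ":memory:" database, so the entire store lives in RAM and queries never touch disk. It is a toy stand-in for purpose-built in-memory platforms, not a recommendation of any specific product.

    import sqlite3

    # The database is held entirely in RAM; nothing is written to disk.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany(
        "INSERT INTO orders (id, amount) VALUES (?, ?)",
        [(i, i * 1.5) for i in range(1000)],
    )

    # A low-latency analytical query served straight from memory.
    total, = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
    print(f"total order value: {total:.1f}")
    conn.close()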

When web infrastructure, collaborative workloads, and application development burst onto the scene some 20 years ago, scaling horizontally became de rigueur.

Next, virtualization and cloudification caused the scale-out paradigm to become dominant. Scale-up systems became somewhat misunderstood along the way, even as they were aggressively being modernized.

Business processing, decision support, and analytics have never fared well on horizontally scaled environments. Instead, these are demanding workloads that require maximum resources to process multiple terabytes of data.

And when these resources, lots of processors that are close together and lots of flat RAM that is globally addressable for In-Line Memory computing, are packaged in a single system, the benefits compared with scale-out environments are significant.


The large memory footprints of scale-up systems allow large and growing databases to be held completely in memory, eliminating the latencies of disk access. Latency is also reduced because interconnects allow for dynamic scaling, rather than the complex and extensive networks needed to connect nodes in a scale-out environment. As a result, power consumption and cooling costs are significantly lower, as are software licensing fees.

Scale-up systems are also suitable for consolidation projects as they are easier to implement and more efficient to manage and operate than scale-out clusters. They take up a smaller footprint and provide greater reliability and availability.

In terms of economics, many of today's scale-up systems are nothing like legacy scale-up systems: they leverage the same standardized components (memory, processors, and storage) that scale-out servers are built with, albeit with their own specifications, rather than the proprietary components of the past. The idea that scale-up systems are too costly is simply no longer valid, especially when they are available on a consumption model basis.

As businesses become more and more data-driven, they quickly realize that to stay competitive, they need a solution that not only provides advanced capabilities for performing highly complex technical computations but can support deep data collection and predictive analysis at the same time.


Traditionally, these two domains, High-Performance Computing (HPC) and Artificial Intelligence (AI), existed as separate environments, each with its unique hardware, software, storage, and networking requirements. HPC usually involves a significant amount of computing power employing state-of-the-art parallel processing techniques. On the other hand, AI (including Machine Learning (ML) and Deep Learning (DL)) employs iterative algorithms to find insights hidden in oceans of data collected over time.

Dr. Stratos Idreos of Harvard University and his colleagues are developing a new kind of system that relies on mathematics, engineering, and machine learning to automatically discover and design the core building blocks of data systems, known as data structures, in a way that's as near perfect a fit as possible for the task at hand.

Not too long ago, building such a converged HPC/AI environment would require spending a lot of money on proprietary systems and software in the hope that it would scale as business demands changed. By relying on open-source software and the latest high-performance, low-cost system architectures, it is possible to build scalable hybrid on-premises solutions that satisfy the needs of converged HPC/AI workloads while being robust and easily manageable.

Today, business and science entities constantly generate new contexts for data management.

Given a set of distilled fundamental principles, organizational schemas must be mapped to the appropriate context. Building and testing all these data structures one by one would take an insurmountable amount of time and expense. The authors have sought ways to evaluate each possible design's utility without implementing and trying it.

The most productive method would be to use a Data Structure Alchemist, which assesses each design principle based on the context of the data and the hardware where the desired data structure is expected to work. The system can then synthesize complex design behavior based on individual principles and deploy a machine learning-based search algorithm that continuously learns and becomes better at designing data structures.


Generally used to help recognize patterns, machine learning can sort through the vast pool of solutions, seeking critical features found in existing data management systems.

The idea is to minimize or even eliminate the human effort needed to design new data structures while considering the data workload, hardware compatibility, a client's budget restrictions, and how quickly data should be searched and retrieved. And the more it is used to design solutions for different contexts, the more it learns and can recognize patterns for which designs work best under which conditions, improving the method's speed over time. Once a close-to-optimal solution is found, the system automatically codes the target design to deliver the desired data structure ready-made for users or other systems.
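The sketch below is a deliberately tiny stand-in for that search loop (the candidate designs and cost numbers are hypothetical, and this is not Dr. Idreos' system): a simple analytical cost model scores a handful of familiar storage designs against a workload's read/write mix and picks the cheapest fit.

    # Hypothetical candidate designs with rough relative costs per read and write.
    CANDIDATES = {
        "b-tree":     {"read": 1.0, "write": 3.0},
        "lsm-tree":   {"read": 3.0, "write": 1.0},
        "hash-index": {"read": 0.5, "write": 2.0},
    }

    def pick_design(read_fraction: float) -> str:
        # Expected cost per operation under the given read/write mix.
        write_fraction = 1.0 - read_fraction
        def cost(c):
            return read_fraction * c["read"] + write_fraction * c["write"]
        return min(CANDIDATES, key=lambda name: cost(CANDIDATES[name]))

    print(pick_design(0.9))  # read-heavy workload  -> hash-index
    print(pick_design(0.1))  # write-heavy workload -> lsm-tree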

STORAGE DATA MANAGEMENT

"In 2023, look for more usage of object stores, for storing structured and unstructured data, files, blocks, objects all in one repository. AI's large data sets need proximity to where processing is happening. So, rather than viewing it as a large cold store, object stores are going to be able to do AI-type workloads, which means large sets of data can be accessed with high bandwidth and low latency. As a result of the rise of data-intensive AI workloads, we'll see the need for high performance NVMe storage also increase since this high-performing object-store resides on flash-based storage, as opposed to the traditional, cold storage”.


The Open Fabrics Alliance (OFA) is an open source-based organization whose mission is to develop and promote software that enables maximum application efficiency by delivering wire-speed messaging, ultra-low latencies, and maximum bandwidth directly to applications with minimal CPU overhead. The OFA develops, tests, supports, and distributes open-source Advanced Network Software, a suite of high-performance APIs and associated software for current and future HPC, cloud, and enterprise data centers.

The Gen-Z Consortium is an industry organization developing an open-systems, fabric-based architecture designed to provide high-speed, low-latency, secure access to data and devices. The Consortium is tasked with developing specifications to define memory-centric fabric technology. In addition, Gen-Z has established key partnerships with multiple open standards-based organizations as part of an ongoing commitment to innovation and ecosystem advancement.


HOW DO THE BASIC PIECES DETERMINE WHAT CAN BE BUILT?


The efficiency of a given solution depends on the pieces used to build it. For example, simplifying a computer's building blocks can reduce the number of steps needed to perform the same operation, a necessity as quantum computing becomes more advanced.

Quantum computing's basic gate sets often yield more complex circuits than richer gate choices. As a result, adding relatively few basic operations can produce powerful new computing paradigms and increasingly flexible and efficient solutions.
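A minimal numpy sketch of the point (an illustration, not a statement about any particular hardware gate set): the same operation can cost more or fewer steps depending on which basic gates are available. Here the X gate is rebuilt from H and Z, so a gate set without X can still express it, at the price of three operations instead of one.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    Z = np.array([[1, 0], [0, -1]])                # phase-flip gate
    X = np.array([[0, 1], [1, 0]])                 # bit-flip gate

    # Three gates from the {H, Z} set reproduce a single X gate: H Z H = X.
    composed = H @ Z @ H
    print(np.allclose(composed, X))  # True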

SUPER CONTAINERS

Container computing has revolutionized how many industries and enterprises develop and deploy software and services. Recently, this model has gained traction in the High-Performance Computing (HPC) community through enabling technologies including Charliecloud, Shifter, Singularity, and Docker. In this same trend, container-based computing paradigms have gained significant interest within the DOE/NNSA Exascale Computing Project (ECP).

While containers provide greater software flexibility, reliability, ease of deployment, and portability for users, there are still several challenges for Exascale in this area.

The goal of the ECP Supercomputing Containers Project (called Supercontainers) is to use a multilevel approach to accelerate the adoption of container technologies for Exascale, ensuring that HPC container runtimes will be scalable, interoperable, and integrated into Exascale supercomputing across DOE. The core components of the Supercontainer project focus on foundational system software research needed for ensuring containers can be deployed at scale, enhanced user and developer support for enabling ECP Application Development (AD) and Software Technology (ST) projects looking to utilize containers, validated container runtime interoperability, and both vendor and E6 facilities system software integration with containers.


JOE REDDIX, PRESIDENT AND CEO

"AI is and will always be a moving target that requires a highly diverse workforce to truly make it work for the entire nation. Integrating elements of computer science with advanced technology research in the development of ethical AI practices, the articulation of performance standards for workforce education, and providing the sustainability for these aspects is a strategic national goal that I can proudly get behind and support whole-heartedly."


Reddix is an expert in Quantum Computing (QC) and Artificial Intelligence (AI) with a background in secure Information Technology (IT). He is the creator of the “Anywhere, Anytime, for any Reason” Seal Team approach to problem solving. He advises and assists President Biden’s Administration in a comprehensive analysis of issues and concerns currently constraining our Nation’s Artificial Intelligence capabilities and threat assessments. He is the founder of The Reddix Group (TRG), a company specializing in Master Systems Integration (MSI) headquartered in Hanover, Maryland.

Information derived from using AI and QC does one of two primary things: create or destroy. If developed and pursued for the wrong reasons, the outcomes will forever change and alter our basic concepts of participatory democracy. The development of ethical and responsible uses for AI and QC technologies is a serious undertaking and, coincidentally, a team sport where there is no bench and no time-outs.

The Office of the Under Secretary of Defense for Research and Engineering invited TRG to take part in three workshops to help ideate the future DoD marketplace for AI.

Everyone must play, and how well Americans compete in the global marketplace will forever determine our place and assigned role in the community of nations.

Discussions centered around issues related to the AI Research and Development Strategic Plan and explored ways to engage, as well as approaches to promote and encourage more industry and academia interaction with the federal government. Key performance areas included coordination, management, opportunities, accountability, intellectual property rights, and ethical implementations of AI. The aim of the three workshops was to articulate, develop, and rapidly deploy critically needed AI and Machine Learning (ML) solutions into the federal workplace.


Joseph Reddix, President and CEO, The Reddix Group (TRG)
Dr. Merrick S. Watchorn, DMIST, QIS, QSA, Huntington Ingalls Industries (HHI)
Dr. Hans C. Mumm, QSA

LINDA RESTREPO

Linda Restrepo is Director of Education and Innovation at the Human Health Education and Research Foundation. She is a recognized Women in Technology leader in Cybersecurity and Artificial Intelligence.

Restrepo's expertise also includes exponential technologies, computer algorithms, the research and implementation management of complex human-machine systems, and global economic impacts research. Restrepo is President of a global government and military defense multidisciplinary research and strategic development firm.

She has directed corporate technology commercialization through the US National Laboratories, and her work extends to emerging infectious diseases. Restrepo is also the Chief Executive Officer of Professional Global Outreach. She has advanced degrees from The University of Texas and New Mexico State University.

Quantum technology is a place where everything goes awry, “universal” laws don’t apply and nothing makes sense.

INNER SANCTUM VECTOR N360™ ©
TECHNOLOGY IN THE MAKING

DISCLAIMER: This Magazine is designed to provide information, entertainment, and motivation to our readers. It does not render any type of medical, political, cybersecurity, computer programming, defense strategy, ethical, legal, or any other type of professional advice. It is not intended to be, nor should it be construed as, a comprehensive evaluation of any topic. The content of this Magazine is the sole expression and opinion of the authors. No warranties or guarantees are expressed or implied by the authors or the Editor. Neither the authors nor the Editor is liable for any physical, psychological, emotional, financial, or commercial damages, including, but not limited to, special, incidental, consequential, or other damages. You are responsible for your own choices, actions, and results.

