Data and Machines of the Future

As we move toward the future, we increasingly notice two concepts that have always been at odds: data and computing power. The rift runs as follows: we have always had more data than we can process, and the data we have is not always the best data to be processing. We are now reaching the point where these two issues are starting to blur.

First, we are creating computers capable of processing the vast amounts of data we now generate. Second, we are creating synthetic data that may not be “real”; yet if it is “authentic,” users may prefer it. Let’s discuss these two topics and how they will interact in the future.

Rise of the Machines

A new class of computers is emerging that stretches the boundaries of the problems we can solve. These new devices, drawn from three areas discussed below (high-performance computing, quantum computing, and biology-inspired computing), are pushing past the limits of Moore’s Law and creating a new computing capability curve.

Companies and industries have always been defined by their limitations: the problems that are currently unsolvable. These new machines may help companies solve the presently unsolvable and move beyond it.

These ongoing challenges define the boundaries of companies and shape their core products, services, and overall strategies at any given time. For decades, the financial services industry has operated under the assumption that predicting the movement of the stock market and accurately modeling market risk are somewhere between hard and impossible; in the near future, that may no longer be true.

Emerging technologies, when combined, can potentially make these core challenges achievable. With quantum computing as the next level of problem-solving, paired with high-performance computing (HPC) and massively parallel processing supercomputers, it becomes possible to work with never-before-seen swaths of data.

As business leaders, we must build partnerships and inroads to understand the latest technological developments in computing and in our industries at large. This creative process includes experimentation and the design of a skills pipeline that will lead to future success.

New Data Types

With the rise of chatbots, augmented reality (AR), and synthetic data (including deepfake audio, images, and video), we are forced to evaluate what is “real” and what is not. When we see news of the latest global issue, we want to know that it is real, but do we care whether the newest advertisement for our favorite snack is?

We may even prefer the unreal. If we are discussing a sensitive health issue with a synthetic (AR) nurse, or training an AI on synthesized data designed to remove historical discrimination, then the unreal may be the preference.

As technology progresses, we will shift from a desire for the real to a desire for the authentic. Authenticity is defined by four foundational measures (a brief illustrative sketch follows the list):

1. Provenance. What is the source of the data?

2. Policy. How has the data been restricted?

3. People. Who is responsible for the data?

4. Purpose. What is the data trying to accomplish?
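
As an illustration, these four measures could travel with a dataset as a simple metadata record. The sketch below is a hypothetical Python example (the AuthenticityRecord class and its fields are our own invention, not a standard schema), assuming Python 3.9+ for the built-in type hints.

```python
from dataclasses import dataclass

@dataclass
class AuthenticityRecord:
    """Hypothetical metadata capturing the four measures of authenticity."""
    provenance: str       # Provenance: where the data came from
    policy: list[str]     # Policy: restrictions placed on the data
    people: str           # People: who is responsible for the data
    purpose: str          # Purpose: what the data is trying to accomplish

# Example: tagging a synthetic training set with its authenticity metadata.
record = AuthenticityRecord(
    provenance="generated from an internal 2023 claims dataset",
    policy=["no re-identification", "internal use only"],
    people="data-governance team",
    purpose="train a fraud-detection model without exposing customer records",
)
print(record)
```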

Synthetic data aims to correct data bias, protect data privacy, and make AI algorithms fairer and more secure. Synthetic content helps design more seamless experiences and provides novel interactions with AI that save time, energy, and cost. However, the use of synthetic data will be complex and controversial.

Data and Computing

High performance is a growing necessity. IDC reported that 64.2 zettabytes (ZB) of data were created or replicated in 2020, a figure expected to nearly triple to 180 ZB by 2025. Only 10.6% of the 2020 data was useful for analysis, and of that, only 44% was actually used.
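
A quick back-of-the-envelope calculation, using only the figures cited above, shows how little of that data is ever analyzed.

```python
# Rough arithmetic on the IDC figures cited above.
created_2020_zb = 64.2    # zettabytes created or replicated in 2020
useful_share = 0.106      # share of that data deemed useful for analysis
analyzed_share = 0.44     # share of the useful data that was actually used

useful_zb = created_2020_zb * useful_share
analyzed_zb = useful_zb * analyzed_share
print(f"Useful for analysis: {useful_zb:.1f} ZB")   # ~6.8 ZB
print(f"Actually analyzed:   {analyzed_zb:.1f} ZB") # ~3.0 ZB, under 5% of the total
```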

The answer to this massive issue is high-performance computing (HPC). While the field of HPC is not new, its potential has expanded: the smartphones of today contain the processing power of the supercomputers of three decades ago.

Now, GPUs, ASICs, and other purpose-built processors, such as the D1 Dojo chip designed specifically for the computer vision neural networks that will underpin autonomous driving, are pushing HPC capability to new levels.

The Unreal World

Another issue for the future is the unreal. Dealing with a call-center bot that does not understand your request is maddening, yet AI is already becoming indispensable in business. It constantly improves, and what was once a “differentiator” for businesses has now become a necessity.

Synthetic data is being used to train AI models in cases where real-world data cannot be used. This “realish” yet unreal data can be shared while protecting confidentiality and privacy and maintaining statistical properties. Further, synthetic data can counter human biases and increase diversity.
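
As a minimal sketch of the idea (not a production technique), one could fit a simple statistical model to confidential records and then sample new, synthetic rows that preserve the original means and correlations without reproducing any real record. The example below assumes NumPy is available and uses invented placeholder data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a small table of real, confidential records: (age, income).
real = rng.multivariate_normal(mean=[45, 60_000],
                               cov=[[100, 15_000], [15_000, 4e8]],
                               size=1_000)

# Fit a simple Gaussian model to the real data...
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# ...then sample synthetic records that share its statistical properties
# without containing any actual row.
synthetic = rng.multivariate_normal(mu, sigma, size=1_000)

print("real means:     ", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```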

However, synthetic data comes with significant challenges. The world of deep fakes and disinformation is causing predictable damage, and the use of AI algorithms in social media creates echo chambers, filter bubbles, and algorithmic confounding that can reinforce false narratives. 

New Computing Technologies

Quantum Computing

While HPCs and supercomputers are able to process more data, they’re simply new versions of the same old stuff. 

The next generation of computer evolution will likely arrive when quantum computers begin to solve problems that we currently consider intractable. Quantum research is still in its infancy but is likely to follow an exponential curve.

The estimated number of qubits needed to crack current levels of cybersecurity is several thousand, and the devices being designed by IBM and Google have reached an announced 127 qubits, while others have claimed to reach 256. This is up from the 53-qubit processors that IBM and Google announced in 2019.

A doubling every two years sounds like Moore’s Law, but quantum computing does not follow the same curve. Because of entanglement, adding one more qubit to a quantum system doubles the amount of information the system can represent. The move from 53 to 127 qubits therefore means that computing power has doubled 74 times in just three years.
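
A quick calculation makes the scale of that claim concrete: under this reasoning, 74 additional qubits correspond to a state space roughly 2^74 (about 10^22) times larger.

```python
# Each added qubit doubles the size of the quantum state space,
# so going from 53 to 127 qubits represents 74 doublings.
added_qubits = 127 - 53
growth = 2 ** added_qubits
print(added_qubits)      # 74
print(f"{growth:.2e}")   # ~1.89e+22
```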

Mimicking and Using Nature

The other technology reshaping computing takes its lessons from nature. Biology-inspired computing draws its ideas from a 3.7-billion-year-old system and comes in two subclasses:

1. Biomimicry, or computing systems that draw their inspiration from biological processes.

2. Biocomputing, or systems that use biological processes to carry out computational functions.

Biomimicry has already shaped chip architectures and data science algorithms. However, we are now beginning to see machines that do not merely mimic biological operations but actually leverage biological processes themselves.
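
A classic example of a biomimetic data science algorithm is the genetic algorithm, which borrows mutation and selection from evolution. The toy sketch below is purely illustrative (the target, population size, and mutation rate are arbitrary choices of ours): it evolves random bit strings toward a target string.

```python
import random

random.seed(1)
TARGET = [1] * 20  # the "ideal" bit string we want the population to evolve toward

def fitness(individual):
    """Count how many bits match the target (higher is better)."""
    return sum(a == b for a, b in zip(individual, TARGET))

def mutate(individual, rate=0.05):
    """Flip each bit with a small probability, mimicking biological mutation."""
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

# Start with a random population, then repeatedly select the fittest and mutate.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # "survival of the fittest"
    population = [mutate(random.choice(parents)) for _ in range(30)]

best = max(population, key=fitness)
print(f"best fitness after 50 generations: {fitness(best)}/20")
```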

Data storage is a biocomputing darling for good reason. Based on the density of natural DNA found on Earth, one estimate predicts that an exabyte of data (one million terabytes) could be stored in a single cubic centimeter and persist for over 700,000 years.
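
The density comes from the fact that each nucleotide (A, C, G, or T) can encode two bits. The sketch below shows that mapping in its simplest form; real DNA storage systems add error correction and synthesis constraints that this illustration ignores.

```python
# Map every two bits to one nucleotide: the source of DNA's storage density.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a strand of nucleotides (4 bases per byte)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"unreal data")
print(strand)                        # 44 nucleotides for 11 bytes
assert decode(strand) == b"unreal data"
```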

Moving Forward

How do businesses incorporate new forms of data and new ways of computing into practice? 

The first action is to begin evaluating how these different technologies will shape your industry and your operations. Which problems are currently treated as a cost of doing business, and what would change if they could be solved? How could synthetic data improve your current business functions, and what risks would need to be watched for? Which kinds of machines could affect your business first?

Those who desire to take an active role in shaping the future should consider what hardware can be used to solve the currently unsolvable.  

No matter the industry, forging partnerships is a critical step. Most businesses can gain skills and capabilities from such partnerships, and many industry problems require collaboration at scale.

Alliances and partnerships formed today will be the outliers of the industry tomorrow.

Closing Thoughts

We have always been defined by our unanswerable questions, and the advent of computers has helped us solve grand challenges. We are also facing a synthetic, unreal world that is intended to improve our lives; but, depending on the user’s intent, such data and its progeny can become tools of malice.

Both of these developments have reached the point where business leaders can no longer treat them as abstract. They are improving rapidly, and their impact on industries will be profound in the coming decade. The unsolvable will become a thing of the past, and what we believe to be real will come into question.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.
