What is Somnium Space?

Somnium Space started in 2017 and is, by area, one of the largest virtual blockchain worlds (VBWs). As with other VBWs, Somnium Space lets users create fully customizable environments and programmable independent VR experiences within its larger connected world. These environments are possible through Somnium’s four key offerings:

  1. An SDK to create avatars and property
  2. An NFT marketplace where game-based assets can be traded
  3. A module for building environments and structures within them
  4. Virtual reality experiences

Somnium Space allows creators to build and monetize VR experiences drawn from their own imaginations while also integrating blockchain technology. This means that creators are the designers and main recipients of the value they generate. Let’s take a deeper look at this second-largest take on the metaverse.

Courtesy of Somnium Space

Somnium Space Basics

Somnium Space is a VBW built on the Ethereum blockchain. Somnium is an open-source platform with an immersive VR world that allows users to buy digital real estate, including land, homes, buildings, and several other in-game assets that have value. Somnium’s immersive dynamics allow players to build and monetize their environments or visit other users’ creations, like swimming pools, museums, restaurants, nightclubs, or casinos. The possibilities for building within Somnium are nearly limitless, allowing for the construction of unique experiences, worlds, and assets.

While traditional multiplayer VR games divide their users into mirrored instance rooms via sub-servers, Somnium hosts all players in one vast interconnected world. Within this broader VR universe, users can create Somnium environments: customized, programmable, independent VR experiences.

What’s more, the NFT assets from within Somnium are compatible with other metaverses and platforms throughout the Ethereum ecosystem (and potentially other blockchains).

Somnium’s four main elements are those listed in the introduction, and it has deeply incorporated NFTs into its technology, allowing players to bring NFTs from outside its universe (from other parts of the decentralized ecosystem) inside.

Somnium’s Tokenomics

With traditional gaming, users generate value that goes to the developer. Players purchase the game or, with freemium games, buy upgrades, access, and customizations à la carte.

They generally can’t take in-game assets out of the game. For example, if a player buys upgraded armor, unlocks a new vehicle, or gains access to a new world, that value remains in-game only. You cannot take the armor or vehicle to another game, nor unlock the asset’s value for use on another platform.

However, in Somnium and other blockchain-based games and metaverses, the opposite is true: assets are tokenized, increasing their benefit to the owner. As an Ethereum application, Somnium allows in-game assets such as real estate, avatars, wearables, and collectibles to be tokenized, decoupling those assets from Somnium, the company. This player-generated value lets players access the token value created in Somnium elsewhere in the broader crypto and token economy.

The Somnium economy is based on three token assets:

Somnium’s Cube Token (CUBE)

The CUBE is an ERC-20 (Ethereum) token that works as Somnium’s native utility token. The CUBE streamlines in-game player transactions and is most similar to tokens bought at an arcade. With an Ethereum wallet, players can hold ETH, CUBE, and NFTs (in ERC-721 form). 
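To make the mechanics concrete, here is a minimal Python sketch of the transfer semantics an ERC-20 token like CUBE relies on. This is an illustration only, not Somnium’s actual contract (the real CUBE contract lives on Ethereum and is written in Solidity); the account names are hypothetical, and the supply figure simply echoes the 100 million cap mentioned later in this article.

```python
# Illustrative sketch only: a minimal ERC-20-style ledger in Python,
# showing the balance/transfer semantics a token like CUBE relies on.
# Account names are hypothetical; this is not Somnium's contract.

class SimpleToken:
    def __init__(self, total_supply: int, owner: str):
        self.total_supply = total_supply
        self.balances = {owner: total_supply}  # owner starts with the full supply

    def balance_of(self, account: str) -> int:
        return self.balances.get(account, 0)

    def transfer(self, sender: str, recipient: str, amount: int) -> bool:
        # An ERC-20 transfer fails if the sender's balance is insufficient.
        if self.balance_of(sender) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[recipient] = self.balance_of(recipient) + amount
        return True

cube = SimpleToken(total_supply=100_000_000, owner="treasury")
cube.transfer("treasury", "alice", 500)
print(cube.balance_of("alice"))  # 500
```

A real wallet interacting with CUBE performs the same conceptual steps, but the ledger is maintained by the Ethereum network rather than a single program.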

CUBE is the bridge between assets for in-game commerce. As Somnium’s universe expands, CUBE will develop further in-world utility, allowing players to live and transact within its VR world.

CUBE’s price, courtesy of Coinmarketcap.com

Somnium’s Land Parcels (PARCELs)

Somnium Space had two “Initial Land Offerings” (ILOs) to issue PARCELs to stakeholders via the OpenSea NFT marketplace. Players who want to build their own Somnium worlds must obtain at least one land PARCEL. Players can also put any NFT on their PARCEL and explore the PARCEL in VR. 

Somnium map, courtesy of Somnium Space

Somnium’s Avatars

At the end of 2020, the Somnium team expanded the CUBE’s utility with AVATAR tokenization. Players can mint full-body VR avatars onto the blockchain via CUBE. Players purchase an AVATAR with CUBE, and it becomes part of their inventory. AVATARs are compatible with other virtual worlds across many digital platforms.

CUBE tokens can be used to purchase another player’s avatar in NFT form. The buyer’s CUBE is exchanged for the NFT AVATAR of the seller. The ability to create avatars within Somnium exemplifies CUBE’s growing utility.  

Somnium’s Karma Levels

The Karma level indicates how VR citizens perceive each other. Somnium calculates a player’s Karma level using three main metrics:

  1. Rating: how other virtual citizens perceive them based on on-platform interactions. 
  2. Engagement: each player’s economic activity value, referring to a score including their time spent gaming, land ownership, and world discovery rate. 
  3. Other factors: these include building, public participation, and event organizing.

Players will earn CUBE based on their Karma level, and those who provide value to the community, such as instructors or guides, will earn it as well.
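Somnium has not published an exact Karma formula, but conceptually it combines the three metrics above into a single score. The Python sketch below shows one hypothetical way to do so; the weights and the 0-to-1 normalization are illustrative assumptions, not Somnium’s actual method.

```python
# Hypothetical sketch: Somnium's real Karma formula is unpublished, so
# these weights and the 0..1 normalization are illustrative assumptions.

def karma_level(rating: float, engagement: float, other: float,
                weights=(0.5, 0.3, 0.2)) -> float:
    """Combine the three metrics (each normalized to 0..1) into one score."""
    w_rating, w_engagement, w_other = weights
    return round(w_rating * rating + w_engagement * engagement + w_other * other, 3)

# A player highly rated by peers but with modest engagement:
print(karma_level(rating=0.9, engagement=0.6, other=0.4))  # 0.71
```

Any real implementation would also need anti-abuse safeguards, since a score that pays out CUBE invites gaming of the metrics.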

User Opportunities

Somnium offers a Software Development Kit (SDK) built on Unity for customizing and developing property and avatars, with the avatars interoperable with other platforms and virtual worlds.

The SDK includes a builder mode so that complex and intricate structures can be designed. Once developed, these can be listed as assets on the NFT marketplace and become part of the metaverse.  

Builder mode, courtesy of Somnium Space

Somnium is now interoperable with Polygon so users can transfer their NFTs in and out of Somnium, saving on fees. These NFTs can be any of the following:

  • Cars or other vehicles
  • Unique avatar wearables 
  • Event tickets for entry to a parcel
  • Teleportation hubs to travel across the metaverse
  • Treasure hunts leading to CUBEs

There will be a maximum of 100 million CUBE tokens minted, limiting supply and supporting value for holders. The fees charged by Somnium are minimal, making it easier to gain from a democratized metaverse economy.

Closing Thoughts

VR platforms such as VRChat, AltspaceVR, and Rumii are popular venues for distanced social interaction and corporate meetings. Concurrently, Ethereum-based blockchain metaverses like Somnium Space have built multiplayer ecosystems, unlocking value in a novel way. The true idea of the metaverse is an entirely decentralized world where we interact using blockchain technology.

By integrating blockchain, Somnium users can create experiences from their imaginations and monetize these VR experiences in a way other platforms do not allow. While the play may be virtual in Somnium’s version of the metaverse, it has created a real economy that moves beyond the space that Somnium inhabits and potentially blurs the lines into the augmented and real worlds.  

Somnium could be a hit if it can attract the right users: those who will create exciting experiences that others will be enticed to partake in and, more importantly, pay for. Its potential success is hard to determine. It relies on users for content creation, which is a risky proposition, and while it allows creators to gain, the platform always takes its cut.

Somnium has yet to gain a significant following, even though, by digital area, it has the second-largest metaverse environment, behind Decentraland. At the end of 2021, Decentraland hosted 300,000 monthly users, while in the same period YouTube had 2.6 billion. Immersive and original content is vital to Somnium’s success. Let’s see what the inevitable future of VR brings.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

AI’s Transformation of Oncology

Artificial intelligence (AI) is constantly reshaping our lives. It saves companies, and us, time and money, but it also has applications in medicine that could potentially save our lives.

We can study AI’s evolution and achievements to model future development strategies. One of AI’s most significant medical impacts is already being seen, and will continue to grow, in oncology.

AI has opened essential opportunities for cancer patient management and is being applied to aid in the fight against cancer on several fronts. We will look into these and see where AI can best aid doctors and patients in the future. 

Where Did AI Come From?

Alan Turing first conceived the idea of computers mimicking critical thinking and intelligent behavior in 1950, and by 1956, John McCarthy had coined the term artificial intelligence (AI).

AI started as a simple set of “if A then B” computing rules but has advanced dramatically in the years since, comprising complex multi-faceted algorithms modeled after and performing similar functions to the human brain.

AI and Oncology

AI has now taken hold in so many aspects of our lives that we often do not even realize it. Yet, it remains an emerging and evolving model that benefits different scientific fields, including a pathway of aid to those who manage cancer patients.  

AI excels at specific tasks. It is especially good at recognizing patterns and interactions after being given sufficient training samples. It takes the training data, develops a representative model, and uses that model to process new data and aid decision-making in a specific field.
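As a toy illustration of this train-then-apply loop, the pure-Python sketch below learns a “model” (here, simple class centroids) from labeled training samples, then uses it to classify a new measurement. The feature values and labels are entirely made up for illustration; real oncology models are vastly more complex and are trained on imaging or genomic data.

```python
# Toy illustration of learning a model from labeled training samples,
# then using it to classify new data. All numbers here are invented.

def train_centroids(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [s / counts[lbl] for s in acc] for lbl, acc in sums.items()}

def classify(centroids, features):
    # Assign the label whose centroid is nearest (squared Euclidean distance).
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], features))

training = [([1.0, 1.2], "benign"), ([0.9, 1.0], "benign"),
            ([3.0, 3.1], "malignant"), ([3.2, 2.9], "malignant")]
model = train_centroids(training)
print(classify(model, [2.9, 3.0]))  # malignant
```

The point is the workflow, not the algorithm: a representative model is fitted once from training data, then applied repeatedly to new cases.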

When applied to precision oncology, AI can reshape the existing processes. It can integrate a large amount of data obtained by multi-omics analysis. This integration is possible because of advances in high-performance computing and several novel deep-learning strategies. 

Notably, applications of AI are constantly expanding in cancer screening and detection, diagnosis, and classification. AI is also aiding in the characterization of cancer genomics and the analysis of the tumor microenvironment, as well as the assessment of biomarkers for prognostic and predictive purposes. AI has also been applied to follow-up care strategies and drug discovery.  

Machine Learning and Deep Learning

To better understand the current and future roles of AI, two essential terms that fall under the AI umbrella must be clearly defined: machine learning and deep learning.

Machine Learning

Machine learning is a general concept that indicates the ability of a machine (a computer) to learn and therefore improve patterns and models of analysis.  

Deep Learning

Deep learning, on the other hand, is a machine learning method that utilizes algorithmic systems, called deep networks, which mimic systems of biological neurons. When finalized, these deep networks have high predictive performance.
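A minimal sketch of the building block behind such deep networks is a single artificial neuron: it weights its inputs, sums them, and applies an activation function, loosely mimicking a biological neuron’s firing. The weights and inputs below are arbitrary illustrative values, not trained parameters.

```python
# Minimal sketch of one artificial neuron, the building block that deep
# networks stack in layers. Weights and inputs are arbitrary examples.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, then a sigmoid "firing" function in (0, 1).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# One neuron mapping two inputs to a single activation value.
out = neuron([0.5, 0.8], weights=[1.2, -0.4], bias=0.1)
print(round(out, 3))
```

A deep network connects thousands to billions of such units in layers and learns the weights from data, which is what gives it its predictive performance.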

Both machine and deep learning are central to the AI management of cancer patients.  

Current Applications of AI in Oncology

To understand the roles and potential of AI in managing cancer patients and show where the future uses of AI can lead, here are some of the current applications of AI in oncology.  

In the charts below, “a” refers to oncology and related fields and “b” to types of cancers for diagnosis.

Courtesy of the British Journal of Cancer. (a) Oncology and related fields: cancer radiology 54.9%, pathology 19.7%, radiation oncology 8.5%, gastroenterology 8.5%, clinical oncology 7.0%, and gynecology 1.4%. (b) Tumor types: general cancers 33.8%, breast cancer 31.0%, lung cancer 8.5%, prostate cancer 8.5%, colorectal cancer 7.0%, brain tumors 2.8%, and six other tumor types at 1.4% each.

The above graph, from the British Journal of Cancer, summarizes all FDA-approved artificial intelligence-based devices for oncology and related specialties. The research found that 71 devices have been approved. 

As we can see, most of these are for cancer radiology, meaning they detect cancer through various radiological scans. According to the researchers, the vast majority (>80%) of the approved devices relate to the complicated area of cancer diagnostics.

Courtesy of cancer.gov

The image above shows a deep learning algorithm trained to analyze MRI images and predict the presence of an IDH1 gene mutation in brain tumors.

Concerning the different tumor types that AI-enhanced devices can investigate, most devices apply to a broad spectrum of solid malignancies defined as cancer in general (33.8%). However, the specific tumor that accounts for the largest number of AI devices is breast cancer (31.0%), followed by lung and prostate cancer (both 8.5%), colorectal cancer (7.0%), brain tumors (2.8%), and six other types (1.4% each).

Moving Forward with AI

From its origin, AI has shown its capabilities in nearly all scientific branches and continues to possess impressive future growth potential in oncology.  

The devices that have already been approved are not conceived as a substitution for classical oncological analysis and diagnosis but as an integrative tool for exceptional cases and improving the management of cancer patients. 

A cancer diagnosis has classically represented a starting point from which appropriate therapeutic and disease management approaches are designed. AI-based diagnosis is a step forward and will continue to be an essential focus in ongoing and future development. However, it will likely be expanded to other vital areas, such as drug discovery, drug delivery, therapy administration, and treatment follow-up strategies.

Current cancer types with a specific AI focus (breast, lung, and prostate cancer) are all high in incidence. This focus means that other tumor types have the opportunity for AI diagnosis and treatment improvements, including rare cancers that still lack standardized approaches. 

However, rare cancers will take longer to create large and reliable data sets. When grouped, rare cancers are one of the essential categories in precision oncology, and this group will become a growing focus for AI.  

With the positive results that have already been seen with AI in oncology, AI should be allowed to expand its reach and provide warranted solutions to cancer-related questions that it has the potential to resolve. If given this opportunity, AI could be harnessed to become the next step in a cancer treatment revolution.  

Closing Thoughts

Artificial intelligence (AI) is reshaping many fields, including medicine and the entire landscape of oncology. AI brings to oncology several new opportunities for improving the management of cancer patients. 

It has already proven its abilities in diagnosis, as seen by the number of devices in practice and approved by the FDA. The focus of AI has been on the cancers with the highest incidence, but rare cancers amount to a massive avenue of potential when grouped.  

The next stage will be to create multidisciplinary platforms that use AI to fight all cancers, including rare tumors. We are at the beginning of the oncology AI revolution. 


Brain-Computer Interfaces

Brain-computer interfaces are devices that allow people to control machines with their thoughts. This technology has been the stuff of science fiction and even children’s games for years. 

Mindflex game by Mattel

On the more advanced level, brain-computer technology remains highly experimental but has vast possibilities. First to mind (no pun intended) would be aiding those with paralysis by creating electrical impulses that would let them regain control of their limbs. Second, the military would like to see its service members operating drones or missiles hands-free on the battlefield.

There are also concerns raised when a direct connection is made between a machine and the brain. For example, such a connection could give users an unfair advantage, enhancing their physical or cognitive abilities. It also means hackers could steal data related to the user’s brain signals.  

With this article, we explore several opportunities and issues that are related to brain-computer interfaces.  

Why Do Brain-Computer Interfaces Matter?

Brain-computer interfaces allow their users to control machines with their thoughts. Such interfaces can aid people with disabilities, and they can enhance the interactions we have with computers. The current iterations of brain-computer interfaces are primarily experimental, but commercial applications are just beginning to appear. Questions about ethics, security, and equity remain to be addressed. 

What Are Brain-Computer Interfaces? 

A brain-computer interface (BCI) enables the user to control an external device by way of their brain signals. One BCI currently under development would allow patients with paralysis to spell words on a computer screen.

Additional use cases include: a spinal cord injury patient regaining control of their upper body limbs, a BCI-controlled wheelchair, or a noninvasive BCI that would control robotic limbs and provide haptic feedback with touch sensations. All of this would allow patients to regain autonomy and independence.

Courtesy of Atom Touch

Beyond the use of BCIs for the disabled, the possibilities for BCIs that augment typical human capabilities are abundant. 

Neurable has taken a different route, creating headphones designed to improve the wearer’s focus. They require no touch to control, responding instead to a wink or a nod, and will be combined with VR for a better experience.

Courtesy of Neurable

How do BCIs Work?

Training

Generally, a new BCI user will go through an iterative training process. The user learns how to produce signals that the BCI will recognize, and then the BCI will take those signals and translate them for use by way of a machine learning algorithm. Machine learning is useful for correctly interpreting the user’s signals, as it can also be trained to provide better results for that user over time. 

Connection

BCIs will generally connect to the brain in two ways: through wearable or implanted devices. 

Implanted BCIs are often surgically attached directly to brain tissue, but Synchron has developed a catheter-delivered implant that taps into blood vessels in the chest to capture brain signals. The implants are more suitable for those with severe neuromuscular disorders and physical injuries where the cost-benefit is more favorable. 

A person with paralysis could regain precise control of a limb by using an implanted BCI device attached to specific neurons; any increase in function would be beneficial, but the more accurate, the better. Implanted BCIs can measure signals directly from the brain, reducing interference from other body tissues. However, most implants pose other risks, primarily surgical ones like infection and rejection. Some implanted devices can reduce these risks by placing the electrodes on the brain’s surface using a method called electrocorticography (ECoG).

Courtesy of the Journal of Neurosurgery

Wearable BCIs, on the other hand, generally require a cap containing conductors that measure brain activity detectable on the scalp. The current generation of wearable BCIs is more limited, suited to uses such as augmented and virtual reality, gaming, or controlling an industrial robot.

Most wearable BCIs use electroencephalography (EEG), with electrodes contacting the scalp to measure the brain’s electrical activity. A more recent and emerging wearable method incorporates functional near-infrared spectroscopy (fNIRS), in which near-infrared light is shined through the skull to measure blood flow that, when interpreted, can indicate information like the user’s intentions.

To enhance their usefulness, researchers are developing BCIs that utilize portable methods for data collection, including wireless EEGs. These advancements allow users to move freely. 

The History of BCIs

Most BCIs are still considered experimental. Researchers began testing wearable BCI tech in the early 1970s, and the first human-implanted BCI was Dobelle’s first prototype, implanted into “Jerry,” a man blinded in adulthood, in 1978. A BCI with 68 electrodes was implanted into Jerry’s visual cortex. The device succeeded in producing phosphenes, the sensation of “seeing” light.  

In the 21st century, BCI research increased significantly, with thousands of research papers published. Among the milestones, tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI, in 2005. Nagle was part of Cyberkinetics Neurotechnology’s first nine-month human trial of its BrainGate chip implant.

Even with the advances, it is estimated that fewer than 40 people worldwide have implanted BCIs, and all of them are considered experimental. The market is still limited, and projections are that the total market will only reach $5.5 million by 2030. Two significant obstacles to BCI development are that each user generates their own brain signals and those signals are difficult to measure.  

The majority of BCI research has historically focused on biomedical applications, helping those with disabilities from injury, neurological disorder, or stroke. The first BCI device to receive Food and Drug Administration authorization was granted in April 2021. The device (IpsiHand) uses a wireless EEG headset to help stroke patients regain arm and hand control.  

Concerns With BCI

Legal and security implications of BCIs are the most common concerns held by BCI researchers. Because of the prevalence of cyberattacks already, there is an understandable concern of hacking or malware that could be used to intercept or alter brain signal data stored on a device like a smartphone.

The US Department of Commerce (DoC) is reviewing the security implications of exporting BCI technology. The concern is that foreign adversaries could gain an intelligence or military advantage. The DoC’s decision will affect how BCI technology is used and shared abroad.

Social and Ethical Concerns

Those in the field have also considered BCIs’ social and ethical implications. Wearable BCIs can cost from hundreds to thousands of dollars, a price that would likely mean unequal access.

Implanted BCIs cost much more. The training process for some types of BCIs is significant and could be a burden on users. It has been suggested that if the translations of BCI signals for speech are inaccurate, then great harm could result. 

The Opportunities of BCIs

The main opportunities that BCIs will initially provide are helping those paralyzed by injury or disorders regain control of their bodies and communicate. This is already seen in current research, but in the long term, it is only a stepping stone.

The augmentation of human capability, be it on the battlefield, in aerospace, or in day-to-day life, is the longer-term goal. BCI robots could also aid humans with hazardous tasks or hazardous environments, such as radioactive materials, underground mining, or explosives removal.  

Finally, the field of brain research can be enhanced with a greater number of BCIs in use. Understanding the brain will be easier with more data, and researchers have even used a BCI to detect the emotions of people in minimally conscious or vegetative states.  

Closing Thoughts

BCIs will provide many who need them a new sense of autonomy and freedom they lack, but several questions remain as the technology progresses. Who will have access, and who will pay for these devices? Is there a need to regulate these devices as they begin to augment human capability, and who will do so? What applications would be considered unethical or controversial?  What steps are needed to mitigate information, privacy, security, and military threats?  

These questions have yet to be definitively answered, and they should be answered before the technology matures. The next step for BCIs will be information transfer in the opposite direction, as with Dobelle’s original light-sensing “seeing” BCI of the 1970s, or computers telling humans what they see, think, and feel. This step will bring a whole new set of questions to answer.


What Is Haptic Technology?

Haptic technology, or haptic touch, is going to be our engagement pathway for the future. Since the start of the Covid pandemic, we have been working from home more often, and much of our lives are online. However, we do not have to worry about losing physical touch.

Haptic technology offers its users a more connected experience, and this budding industry is beginning to make its mark on companies that will likely embrace this evolving tech in the future.  

Tactile feedback technologies have been around for decades. The original Xbox controller would vibrate when you were taking damage from an adversary, and phones and pagers have had a vibrate function for decades. As haptic technologies advance, they’re fast becoming powerful tools for consumer engagement.

We will explore haptic technology’s types, advantages, and use cases, including 3D Touch, showing how it can impact a business’s objectives and growth.  

Haptic Technology Explained

Haptic technology uses hardware and software to produce tactile sensations that stimulate the user’s sense of touch, to enhance their experience. For example, the most common applications are the haptic solutions found with phones and game controllers that vibrate. Yet vibrating devices are not the only type of haptic tactile feedback: they can also include things like heat and cold, air pressure, and sound waves.  

Haptic tech is also known as kinaesthetic communication or 3D Touch, and it creates new experiences with motion, vibration, and similar forces. Two terms within haptic technology are similar but should be distinguished: haptics and haptic feedback.

  • Haptics: the overarching term that is used to describe the science of haptic feedback and haptic technology, as well as the neuroscience and physiology of touch.  
  • Haptic feedback: the method by which haptic technologies communicate tactile information to the users.

Haptic Applications and Modalities

Immersion is a haptic tech pioneer whose technology is in over 3 billion devices worldwide. They’re the ones that tell your steering wheel to vibrate when you get too close to a car in another lane. One study on haptics showed that 94% of participants could recall objects through touch alone.  

As the global user base of haptic tech grows, it will continue to expand into novel applications, improving the user’s experience.

The Four Haptic Modalities

Let’s introduce the four main haptic modalities: vibration, button stimulation, thermal stimulation, and kinesthetic. 

Vibration

Most haptic experiences center on vibration feedback. This includes technologies like eccentric rotating mass (ERM) motors and linear resonant actuators (LRAs), both of which create much of the vibration we experience with mobile or wearable devices.

LRA and ERM from Precision Microdrives

Button Stimulation

Until recently, few of our touch screens offered the tactile feedback and versatility of mechanical buttons. We can therefore expect simulated controls to become ever more popular, such as the newer offerings from Apple (“Force Touch” and “Haptic Touch”) and Samsung (“One UI 4”). These virtual buttons can use both haptic and audio feedback to replace the feel of a mechanical pressure plate when fingers press the screen.

Thermal Stimulation

Thermoelectric generators create temperature-based haptic experiences for users. This effect is accomplished through the manipulation of electric current flow between alternating conductors on a device (one warm and one cold). The user can then experience different perceived temperatures.  

Tegway is producing this technology for VR headsets and other applications to add to the experience.  

Source: Tegway

Kinesthetic

Kinesthetic devices are worn on the user’s body and provide the wearer with haptic feedback sensations of mass, movement, and shape. The Dexmo force feedback haptic glove exemplifies the potential growth avenue available in the kinesthetic modality.

Types of Haptic Systems

Three primary haptic system types are now being used across several industries: graspable, touchable, and wearable. 

Graspable

Graspable devices, such as joysticks and steering wheels, can create kinesthetic feedback that informs our nerves, tendons, joints, and muscles. Other applications, such as human-controlled robotic operations, can utilize graspable haptic systems that provide users with tactile movement, vibration, and resistance. This allows for more realistic operation of a remote robot or a system in a virtual environment.

The military is already using graspable haptic devices for their bomb disposal units, while NASA astronauts are using the same technology in robots that make external spacecraft repairs, preventing the need for a much more hazardous and costly spacewalk.  

Touchable

Touchable haptic technology is being more widely used by consumers, whether or not they are aware of it. Most smartphone screens use haptic technology, replacing the mechanical home button with a virtual one and moving the fingerprint reader under the screen. Screens respond to user movements such as touches, taps, and rotations.

A new field within touchable haptic technology is called haptography, the mimicry of object textures and movements. TanvasTouch offers programmable textures that users can feel by swiping their fingers across touchscreens, trackpads, and physical surfaces, letting shoppers feel clothing materials like wool and silk before buying the items.

Source: Tanvas Touch

Wearables

Wearable haptic systems create contact sensations through tactile stimuli, such as pressure, vibration, or temperature, detected by the nerves of the user’s skin.

Virtual Reality (VR) products are the most common application of wearable haptic technology available today. VR gloves are meant to mimic real-world impressions, and they receive input from the user who is controlling their virtual avatar. VR and AR can benefit greatly from the endless consumer engagement options that wearables and haptic tech can provide.  

Haptic Technology Uses

Haptic technologies offer numerous potential advantages. Here are several current and potential use cases for touch-based solutions that tap into the benefits of haptics and can produce a better user experience.

Product Design Applications

Haptic technology can improve the user experience by optimizing the sense of touch in product interfaces.

Automotive infotainment systems will begin to incorporate more haptics into their feature lists. Touch screens will become responsive to the user, offering personalized settings for multiple drivers. Additional automotive applications include pedal feedback and steering enhancements, which are needed as drive-by-wire systems become more common. These help drivers avoid accidents and save on gas. 

Health and Wellness

The newest advances in wearable haptics provide great opportunities within the health-tech industry.  Real-time haptic devices gather biometric data and can adjust the experience to suit the user.

Better data collection and feedback allow enhanced user experiences and, more importantly, improved health outcomes. TouchPoints has a wearable system which the TouchPoints CEO reports can reduce stress by 74% in 30 seconds.  This is done with a vibrating pattern that interrupts anxiety and builds a restful state.

Source: TouchPoints

Other companies involved with posture correction, like ergonomic furniture makers, app creators, or chiropractors, can use haptic technology to improve their products and benefit their users.  

Industrial Training

With haptic feedback, training environments can simulate natural work environments and labor conditions more closely, improving training and overall accuracy. Users can partake in virtual training scenarios in a safe, offline environment while using haptics to get a lifelike experience. 

This virtual haptic process can allow for training in assembly line usage, maintenance, safety procedures, and machinery operation. A similar haptic feedback system can also be used with product testing and many other uses, allowing users to train without risk to themselves or company property.

Accessibility

Accessibility to products and services can be improved for the visually impaired. Haptic technologies let users interact with virtual objects and products, and even approximate an object’s appearance, through touch-based sensory input. A Stanford team has developed a 2.5D display that helps visually impaired users accomplish visual tasks.  

Not only will these new haptic solutions create novel markets and aid those with accessibility restrictions, but they can help ensure a company stays compliant with access regulations.

Rehabilitation

Haptics has the potential to boost the speed and effectiveness of rehabilitation programs. A Dutch startup, SenseGlove, has created a glove that uses VR simulations and haptic training to aid with virtual recovery programs.

Source: SenseGlove

Their product allows someone suffering from nerve damage due to an accident, illness, or stroke to practice daily actions. Things like pouring a cup of hot tea or cutting a steak for dinner can be done in a safe digital environment.

Remote Tasks

With an internet connection, haptic controller, and connected robot, remote tasks will become easier and far less prone to error.

Industries lacking highly skilled specialists can connect via a virtual haptic environment, allowing subject matter experts to manipulate a robot from anywhere in the world or beyond.

Closing Thoughts

Haptic technologies have been around for decades. However, the sector has seen tremendous growth in the past few years. APAC expects the world’s haptic technology market to grow at a compound annual rate of 12% through 2026.

Source: APAC

Haptics is no longer a video game gimmick. New advancements and applications are becoming more widely available. Businesses should explore implementing these technologies into their operations, marketing, and consumer experiences.

By embracing this innovative technology, companies can offer their users an enhanced experience that makes them feel connected to products, services, and the brand. Haptics enables us to feel much more connected, no matter how far the distance between us may be.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

AR, MR, VR, and XR

When someone enters the immersive technology arena, one of the first questions they may ask is: what’s the difference between virtual reality (VR) and augmented reality (AR)? 

These are reasonably easy to distinguish, but additional terms are less standard, such as mixed reality (MR) and extended reality (XR). These terms are becoming more prevalent; the technologies are distinct today but will increasingly converge as joined aspects of the metaverse as we advance. 

We will go through all of these concepts to improve our understanding and provide a few examples.  

What is VR?

Virtual reality, or VR, is what the most prominent tech companies are pushing as the metaverse. It is an entirely immersive alternative reality that is coming to the mass market. It can be experienced by wearing a VR headset such as the Meta Quest, formerly known as Oculus.  

Meta Quest 2, courtesy of GameStop

Wearing a VR headset is like having a large screen directly in front of you. It surrounds your vision, and you cannot see anything else, so you are entirely immersed in the digital environment. For example, a user at home can be transported to an entirely new world through the headset’s immersive audio and visual experience.  

An excellent example of a VR use case let hundreds of people in a shopping mall ride along with the European Rallycross Championship winner.

Virtual reality rally with the European Rallycross Champion, Reinis Nitiss, courtesy of Lattelecom

At the shopping center, people merely sat in real racing car seats mounted to a wall. Still, the virtual reality system put them in the car alongside the Latvian champion, riding the rallycross track at full speed.  

While Oculus was the first widespread VR headset, it was priced out of the range of most consumers. The best-known early application was far more accessible: Google Cardboard. These simple folding cardboard viewers are still available and let users insert a mobile phone to serve as the device’s screen. 

Samsung Gear was the next widely accessible application of VR, a head mount that came with every Galaxy S6 flagship phone purchase.  

Courtesy of Samsung Gear

VR has broadened beyond these initial devices. With Meta’s (Facebook’s) purchase of Oculus and its intention to dominate the metaverse space, the newer generation of devices is compelling and no longer the freebie novelty items they once were. VR has several entertainment uses and is now most familiar in gaming. 

However, VR can also add a lot of value to other applications, such as education, manufacturing, and medicine.

AR Against VR

The main idea behind AR is to add to the reality we are experiencing at any given time rather than completely overwriting our current surroundings and entering a new world.

While VR takes you away from everything around you, AR enhances the real-life environment by placing digital objects and adding audio to the environment through a handheld device. One of the best-known augmented reality applications that emerged in 2016 was Pokemon Go.

Courtesy of Informatics

A great use case for AR is the retail sector, providing customers with benefits once solely the domain of in-store shopping. Through AR, a visual representation (a hologram) of an item, say a piece of clothing, can be overlaid on the current environment. 

AR can also be an excellent tool to help customers understand the spatial orientation of objects, such as placing furniture, appliances, or fixtures in their immediate location and seeing if it fits into the potential buyer’s kitchen or office. 

Other AR companies like Magic Leap are creating lightweight AR solutions and making the technology accessible. They have industrial solutions available from the Americas to Asia. Magic Leap has been working with companies like Cisco, SentiAR, NeuroSync, Heru, Tactile, PTC, and Brainlab to refine and improve their devices for communication, training, and remote assistance for use cases in industrial environments, clinical settings, retail stores, and defense.

Courtesy of Magic Leap

The commercial AR market is also developing rapidly, making it easier for consumers to view and create augmented reality content. For example, Overlee offers canvas prints, augmented photos, albums, cards, and wedding invitations that, viewed through AR, play a video along with the photo. Some wine brands have also added AR to their labels.

Courtesy of LivingWineLabels

AR and VR Against MR

MR is similar to AR: it does not remove you from your current surroundings, but rather reads them and adds digital objects into your environment. However, unlike most AR content, which can be retrieved with a mobile device, you will need a headset, such as one from Magic Leap, to experience mixed reality fully.

Although MR and AR use cases often overlap, mixed reality can provide more significant interaction with digital content in many cases: there is no need to hold a mobile device to keep the illusion going. However, the headset requirement makes MR less accessible to the mass market. For comparison, GSMA data shows 10.98 billion global mobile connections.

MR headsets have nowhere near that reach; they are pricey and still in their early stages. This will take an extended time to change, but the potential is enormous. Once hardware and software improve and acceptance broadens, change could come quickly.  

Closing Thoughts

VR has a head start in the field, being more accessible and easier to implement than AR and MR. However, VR is still becoming established, and the area has several growth opportunities, including haptic body suits and omnidirectional treadmills.

Courtesy of Virtuix

Though VR does have a lead, the long-term prospects for the other realities are equally good. The main difference between VR and AR is the interface. The current generation of VR is bulky and can cause dizziness or eye strain, problems AR and MR largely avoid. In addition, AR and MR provide many use cases in marketing, art, education, and industrial applications.  

Current devices will become less intrusive, and though we use mobile devices now, items like Google Glass (but better designed) will become more common. The future points to a growing number of ergonomic devices for alternative realities rather than cell phones. 

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltec.io.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltec.io.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

Next Generation DNA Sequencing

A sequence tells a scientist the genetic information carried on a particular DNA or RNA segment. For example, the sequence can be used to determine where genes are located on a DNA strand and where regulatory instructions turn those genes on or off. 

In the mid-90s, colleges started teaching their undergraduates about DNA sequencing, with DNA sample amplification tech as the new kid on the block. The Human Genome project was ongoing, and the first human sequence had yet to be completed. 

Twenty-five years later, DNA sequencing is done regularly for many and has helped dramatically with medical and forensic needs. We are now entering a whole new era of sequencing that is the “next generation.” Let’s look at this change and how this generation alters science and medicine.

What Is Next Generation DNA Sequencing?

Next generation DNA sequencing (NGS) gained prominence in the early 2010s; the term describes DNA sequencing technologies that have revolutionized genomic research.  

Original DNA Sequencing

To understand NGS, we need to understand the original type of DNA sequencing. 

First, a DNA strand was copied to create enough material. Then the bases were determined one by one, using gels with capillaries that pulled fragments through with an electric current. This chain-termination method is commonly known as Sanger sequencing. 

The Human Genome Project used Sanger sequencing: multiple international teams took 13 years and $3 billion to decipher the human genome, producing the final draft released in 2003. By 2008, using several NGS techniques, James Watson, co-discoverer of DNA’s structure, received his genome sequence on a hard drive at an estimated cost of $1 million. 

In 2011, Apple co-founder and billionaire Steve Jobs had his genome sequenced for $100,000 to aid his cancer fight. Using NGS, a lab can now sequence an entire human genome in a single day for as little as $100 (Ultima Genomics).  

How NGS Works

NGS involves a four-step process that breaks up the sample (DNA or RNA) and sequences the pieces simultaneously to get faster results.

Source: Illumina

The process is generally as follows:

1. Sample preparation involves fragmenting DNA/RNA into multiple pieces (millions for the human genome) and then adding “adapters” to the ends of the DNA fragments.

2. Cluster generation is where the separated strands are copied millions of times to produce a larger sample. 

3. Sequencing the libraries: each of the strands is sequenced with unique fluorescent markers.

4. A genomic sequence is formed by reassembling the strands using data analysis techniques. 

In principle, the NGS concept is similar to capillary electrophoresis (the gel method used in Sanger sequencing). The critical difference is that because the sample is broken into fragments, NGS obtains the sequences of millions of fragments in a massively parallel fashion, improving accuracy and speed while reducing the cost of sequencing.  
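The four steps above can be sketched in miniature. Everything here is a toy: real pipelines must infer fragment order from overlapping reads, whereas this sketch keeps each fragment's known position so the reassembly stays short.

```python
from concurrent.futures import ThreadPoolExecutor

GENOME = "ATGCGTACCTGAAGGCTAACGTTACGGATCCTAGCATCGGA"

def fragment(seq, read_len=12, step=8):
    # Step 1: break the sample into overlapping fragments ("library prep").
    return [(pos, seq[pos:pos + read_len]) for pos in range(0, len(seq), step)]

def sequence_read(frag):
    # Steps 2-3 stand-in: amplify and read one fragment; here a no-op read.
    pos, bases = frag
    return pos, bases

def assemble(reads):
    # Step 4: reassemble. Real assemblers infer order from read overlaps;
    # this toy sorts by each fragment's known position to stay short.
    reads = sorted(reads)
    genome = reads[0][1]
    for pos, bases in reads[1:]:
        genome += bases[len(genome) - pos:]  # append only the new bases
    return genome

with ThreadPoolExecutor() as pool:  # the "massively parallel" part
    reads = list(pool.map(sequence_read, fragment(GENOME)))

assert assemble(reads) == GENOME
```

The point of the sketch is the shape of the workflow: many small reads handled concurrently, then stitched back into one sequence.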

NGS’ Impact

Compared to the conventional Sanger sequencing method’s capillary electrophoresis, NGS’ short-read massively parallel sequencing technique is a fundamentally different approach that revolutionizes our sequencing capabilities, launching the second generation of sequencing methods.

NGS allows for the sequencing of both DNA and RNA at a drastically lower cost than Sanger sequencing and has therefore revolutionized the study of genomics and molecular biology. 

NGS’ Advantages

Because NGS can analyze both DNA and RNA samples, it’s a popular tool for functional genomics. In addition, NGS has several advantages over microarray methods.

· A priori knowledge of the genome or of any genomic features is not a requirement.  

· NGS offers single nucleotide resolution, which detects related genes and features, genetic variations, and even single base pair differences. In short, it can spot slight differences in code between two samples. 

· NGS has a higher dynamic signal range, making it easier to read.

· NGS requires less DNA or RNA as an input (nanograms of material are sufficient). 

· NGS has higher reproducibility. Because of its other advantages, the chance of an error between repeated tests is reduced.  
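The single-nucleotide-resolution point can be made concrete with a toy comparison. This is not a real variant caller (which must also handle alignment, read quality, and insertions or deletions); it simply reports single-base differences between two pre-aligned sequences.

```python
def find_snps(reference, sample):
    """Report (position, reference_base, sample_base) for every
    single-base difference between two aligned sequences."""
    if len(reference) != len(sample):
        raise ValueError("sequences must be pre-aligned to equal length")
    return [(i, ref, alt)
            for i, (ref, alt) in enumerate(zip(reference, sample))
            if ref != alt]

ref    = "ATGCGTACCTGA"
sample = "ATGCGTGCCTGA"
print(find_snps(ref, sample))  # → [(6, 'A', 'G')]
```

A single A-to-G substitution at position 6 is detected, the kind of base-pair difference NGS can resolve between two samples.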

Most Common NGS Technologies

Three sequencing methods are and were widely used that fall under the NGS umbrella:

· Roche 454 sequencing (discontinued in 2016). This method used pyrosequencing, detecting pyrophosphate release via bioluminescence (a natural light signal). Broken-up DNA strands had unique markers attached.  

Source: bioz.com

· Illumina (Solexa) sequencing. The Illumina process identifies DNA bases as they are added to the nucleic acid chain, each base emitting a different and unique fluorescent signal.

Source: bioz.com

· Ion Torrent (Proton/PGM) sequencing. This kind of sequencing measures the direct release of hydrogen ions (protons) as individual bases are incorporated by a DNA polymerase. Ion Torrent differs from the previous two methods in that it does not rely on a light measurement to do the sequencing.  

Source: bioz.com

How Is NGS Being Used?

The advent of NGS has changed the biotechnology industry. Scientists can now ask and answer questions that were previously cost-prohibitive or that required more sample material than was available. The main applications possible with NGS include:

· Rapidly sequencing the whole genome of any life form, from prions and RNA viruses to individual humans and other mammals. 

· Using RNA sequencing to discover novel RNA variants and splice sites.

· Quantifying mRNAs for gene expression studies.

· Sequencing cancer samples to study rare variants, specific tumor subtypes, and more.

· Identifying novel pathogens (such as viruses in bats).

What Can NGS Do? 

Notable organizations, such as Illumina, 454 Life Sciences, Pacific Biosciences, and Oxford Nanopore Technologies, are working to get prices down so nearly anyone can have sequencing done. For example, Ultima Genomics has claimed a cost of $100 for its sequencing. Companies are now marketing benchtop sequencing platforms that will bring these advances to as many labs as possible.  

Source: Illumina

The Illumina NextSeq sequencer (above) is a benchtop system that can handle nearly any task except “Large Whole-Genome Sequencing.” However, it costs $210,000-335,000.  

We expect NGS to become more efficient and affordable over time, and these cost reductions will revolutionize several genomics-related fields. Currently, all NGS approaches demand “library preparation” after the DNA fragmentation step, where adapters are attached to the ends of the various fragments. That is generally followed by a DNA amplification step to create a library that can be sequenced with the NGS device. 

As we know more about different DNA molecules, we can develop ways to fight disease through gene therapy or particular drugs. This knowledge will help change our way of thinking about medicine.  

Third Generation Sequencing

A new class of sequencing tech, called third-generation sequencing or TGS, is being developed. These technologies can sequence single DNA molecules without the amplification step, producing longer reads than NGS. 

Single-molecule sequencing was started in 2009 by Helicos Biosciences. Unfortunately, it was slow and expensive, and the company went out of business in 2012. Nonetheless, other companies saw the benefit and took over the third-gen space.  

Pacific Biosciences has its single-molecule real-time (SMRT) sequencing, and Oxford Nanopore has nanopore sequencing. Each can produce long reads of 15,000 bases from a single DNA or RNA molecule. This means smaller genomes can be sequenced without the biases or errors inherent to amplification. 

Closing Thoughts

The DNA sequence is a simple format onto which a broad range of biological marvels can be projected for high-value data collection. Over the past decade, NGS platforms have become widely available, with the costs of services falling by orders of magnitude, much faster than Moore’s law, democratizing genomics and putting the tech into the hands of more scientists. 

Third generation sequencing will require robust protocols and practical data-analysis approaches. The coming expansion of DNA sequencing will require a complete rethinking of experimental design. Still, it will accelerate biological and biomedical research, enabling the analysis of complex systems inexpensively and at scale. We can then fight and prevent genetic diseases before they become realized issues.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. Using this, you agree that the information does not constitute investment or financial instructions. Do research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltec.io

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltec.io

The views, thoughts, and opinions expressed in this text are solely the authors’ views, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

What Are Dynamic NFTs?

Non-fungible tokens (NFTs) are finally making their way into the mainstream after achieving widespread adoption in the Web3 community. Despite crypto’s recent boom and bust and the accompanying media spotlight, digital influencers, public figures, and professional athletes have continued to jump on the NFT-collection bandwagon. 

As a result, there remains an interest in NFTs as a prominent application of blockchain technology, which retains the speculative asset moniker. However, the first NFTs were simple: often 8-bit style pictures that could be considered novelties and may or may not “boom” in the future. 

Yet that was just the beginning of the NFT evolution, which may change the broader financial markets as we approach 2023. Dynamic NFTs (dNFTs) are pushing the boundaries of the design space that NFTs address through their ability to adapt and change, responding to external data and events. 

This article gives a brief NFT overview and then explains how dNFTs can take the blockchain space to the next level by highlighting current and potential uses for dNFTs. 

NFTs in Brief

NFTs are unique digital assets held, managed, and exchanged on one or more blockchains. “Non-fungible” means that every NFT is differentiated from every other NFT by a one-of-a-kind token ID and a unique contract address. From there, data such as images, video, or other metadata can be attached to the NFT, meaning it’s possible to own an NFT representing a unique digital object.  

The most common use case for an NFT has been digital art. An artist mints a token representing a digital artwork, and a buyer can purchase the token, giving them ownership. Once an NFT is minted, its token ID doesn’t change. In its simplest form, an NFT is a transferable token with a unique token ID. 

The metadata ascribed to the NFT, including the image, description, and much more, is 100% optional. As a result, this primary (static) NFT model can provide various benefits for digital artists worldwide. 

Before NFTs, digital artists could not stop or track the unauthorized distribution of their work because there was no way to distinguish between copies of a digital file; no single authentic file could be “owned.” Now, digital creators can sell their art to fans and give them verifiable ownership.
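The static model described above can be sketched as a small in-memory stand-in for an NFT contract. The addresses and IPFS link are made-up placeholders, and a real contract (such as an ERC-721 deployment) lives on-chain rather than in Python.

```python
class StaticNFTContract:
    """Toy model of a static NFT contract: each token ID is unique,
    has an owner, and carries metadata that is frozen at mint time."""
    def __init__(self):
        self.owners = {}    # token_id -> owner address
        self.metadata = {}  # token_id -> metadata (set once, never changed)
        self.next_id = 0

    def mint(self, owner, metadata):
        token_id = self.next_id  # token IDs are never reused
        self.next_id += 1
        self.owners[token_id] = owner
        self.metadata[token_id] = metadata
        return token_id

    def transfer(self, token_id, sender, recipient):
        if self.owners.get(token_id) != sender:
            raise PermissionError("only the owner can transfer")
        self.owners[token_id] = recipient

# Hypothetical artist and collector addresses, for illustration only.
contract = StaticNFTContract()
art = contract.mint("0xArtist", {"name": "Sunrise #1", "image": "ipfs://..."})
contract.transfer(art, "0xArtist", "0xCollector")
```

The token ID stays fixed for the life of the token; only the owner field changes on transfer, which is what makes ownership verifiable.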

Dynamic NFTs

Static NFTs are still the most common type of NFT in circulation, used primarily for art projects and gaming collectibles such as NBA Top Shot. Beyond these uses, static NFTs provide a unique value proposition for digitizing real-world items like real estate deeds, patents, other intellectual property, and unique identifiers. 

However, the static NFT model is limited by its permanence. Once the metadata is attached to the token and minted on the blockchain, it cannot be changed. Yet some use cases require frequently updated data, such as real-world assets, progression-based video games, or blockchain-based fantasy sports leagues. 

A dNFT provides the best of both worlds, allowing the retention of a unique identifier while enabling an update to its metadata. In simple terms, a dNFT changes attributes based on external conditions.

dNFTs can be upgraded in several ways based on external conditions, generally through metadata changes triggered by a linked smart contract. The automatic changes are encoded in the NFT’s smart contract, which instructs the underlying NFT on how and when its metadata should change.

Source: Chainlink
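As a rough sketch of that trigger logic, the toy below mimics a contract rule that swaps a token's image once oracle-fed stats cross a threshold. The threshold, stat name, and IPFS links are invented for illustration; a real dNFT would encode this rule on-chain.

```python
class DynamicNFT:
    """Toy dNFT: the token ID never changes, but the metadata the token
    reports depends on an external condition (e.g. oracle-fed stats)."""
    def __init__(self, token_id, base_metadata):
        self.token_id = token_id          # permanent identifier
        self.metadata = dict(base_metadata)

    def on_oracle_update(self, stats):
        # Encoded update rule, analogous to the logic a smart contract
        # would run when an oracle pushes new data: swap the image once
        # a (hypothetical) 1,000-point threshold is crossed.
        points = stats.get("season_points", 0)
        if points >= 1000:
            self.metadata["image"] = "ipfs://gold-tier-art"
        self.metadata["season_points"] = points

nft = DynamicNFT(1, {"image": "ipfs://base-art", "season_points": 0})
nft.on_oracle_update({"season_points": 1200})
```

Note that only the metadata mutates; the token ID, and therefore ownership and provenance, is untouched by the update.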

Other dynamic elements beyond metadata changes are possible. For example, dynamic NFTs can be automatically minted when certain conditions are met, such as when a player finds a hidden spot in an augmented-reality game. dNFTs can also include “hidden traits,” manifested through user interactions rather than within the NFT’s metadata. dNFTs are wholly customizable. 

Use Cases of Dynamic NFTs

An NFT’s name is specified in its metadata. This is also where its traits are assigned, including any relevant file links. While its token ID provides a permanent identifier that verifies ownership, the metadata is the soul of the NFT. The metadata contains the elements that make the NFT useful.  

Artistic projects using NFTs often have a variety of traits, some rarer than others. These traits are placed within the NFT’s metadata, along with a link to a corresponding image or video. And with a dNFT, these traits can change based on external conditions. 

Progressive Gaming

This functionality benefits character progression, a core tenet of several blockchain game models. When a new player creates their playable, NFT-linked character, the character’s base-level statistics are reflected in the NFT’s metadata. As the player levels up, the metadata on their dNFT changes to reflect their progression, choices, and growing stats.  

Real-World Assets

A second use case for shifting metadata is the tokenization of real-world assets. For example, a dNFT representing a property can reflect its age, maintenance history, sales history, market value, and so on. A static NFT could only take a single snapshot of the property at one point in time. 

Popular Examples of Today

Two prominent examples demonstrate to us the growing potential of dNFTs. 

Regenerative Resources’ Short Film dNFTs

Regenerative Resources Co (RRC) is focused on transforming degraded coastal land into highly productive seawater landscapes. RRC has announced that it will launch five short films in dNFT form, designed by prominent artists. 

The proceeds from the dNFTs will be used to grow 100 million mangroves within the space afforded by RRC’s current projects. 

Each dNFT will have a short film in its metadata, starting with a single frame of the film. Every time the dNFT is bought and resold, more frames of each movie will be added to the respective metadata. This addition will continue until the dNFT holder can view the short film. The metadata will also include the “producers,” or those who buy limited-edition posters.  

LaMelo Ball dNFTs

LaMelo Ball, a rising star of the NBA’s Charlotte Hornets, is one of the first professional athletes to create a pioneering dNFT linked to the Chainlink Sports Data Feeds oracle. According to Playground Studio, this dNFT is redefining player-fan relationships.

Before Ball won the NBA’s 2021 Rookie of the Year award, fans minted 8,070 dNFTs across four tiers. Eight premium dNFTs recorded the player’s stats, including points, rebounds, and assists.

Holders receive special access to raffles and specific perks based on Ball’s season and lifetime performance. One of the premium eight NFTs, the “Gold Evolve,” came with a promise from the player that if he won the Rookie of the Year title, it would reflect a new image. When Ball won, the NFT image changed. 

Source: Opensea

These LaMelo Ball dNFTs are examples of how dNFTs can continuously change based on oracle-provided external data. With Ball’s dNFTs, the player’s stats are constantly updated on-chain, triggering updates, rewards, and more.

Closing Thoughts

NFTs are highly speculative assets, and dynamic NFTs have only just started to appear. They’re still more of a novelty for programmers and collectors, adding functionality to the current generation of static NFTs, which mainly contain altered pictures or short video loops.

However, dNFTs’ underlying abilities have immense potential, especially as more oracles are added to blockchains, increasingly able to provide relevant and curated data. These oracles can effectively supercharge dNFTs as programmers learn to fuse changing data with NFTs. Mastering this foundation opens new doors for finance, insurance, real estate, gaming, investing, and more. 

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltec.io

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltec.io

Web3’s Infrastructure

After covering all things blockchain for a few years, we’ve seen how the move toward Web3 is about much more than the “magic” of digital money that many think of when they discuss cryptocurrencies. Web3 has the potential to solve significant issues that plague the web and our world regarding privacy, autonomy, and economics.

The infrastructure behind Web3, the services that help Web3 apps and their underlying blockchains perform better with amplified capabilities, will be a foundational pillar of the movement. There are now over 1,000 blockchains, which require massive infrastructure, and infrastructure makes or breaks new projects.

Inside Web3

We have previously written about Web3 and the pathway to get there from Web 1.0 that began in the 1990s.   

Web3 websites continue to be hosted on traditional web servers. However, unlike the corporate oligarchy inherent in Web2, the users own and operate parts of the project. Web3 websites connect directly to underlying blockchain networks to facilitate user ownership. Typical blockchains used for this purpose are Ethereum, Binance Smart Chain, Solana, and Fantom.

Let’s work through an example using SpiritSwap, a decentralized finance (DeFi) website on the Fantom blockchain.

Source: SpiritSwap

The SpiritSwap web application is hosted on a traditional Web2 server running on Amazon Web Services (AWS). However, to interact with SpiritSwap, a user needs a browser extension wallet, such as MetaMask or Coinbase Wallet, which connects to and authenticates their use of the web application.

These wallets can be thought of as a universal single sign-on tool. Rather than a user logging onto SpiritSwap with a username and password under SpiritSwap’s control, the wallet itself logs in. The wallet also holds all the user’s digital assets (cryptocurrencies and NFTs) while simultaneously acting as their digital identity, represented by a hexadecimal address that starts with “0x.”
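As a minimal illustration of that address format (assuming the standard 20-byte layout of “0x” plus 40 hexadecimal characters), a shape check might look like this:

```python
import re

def looks_like_eth_address(addr: str) -> bool:
    """True if addr has the shape of an Ethereum address:
    '0x' followed by exactly 40 hexadecimal characters.
    (Shape only; this does not verify checksums or ownership.)"""
    return re.fullmatch(r"0x[0-9a-fA-F]{40}", addr) is not None

print(looks_like_eth_address("0x" + "ab" * 20))  # True
print(looks_like_eth_address("0x1234"))          # False
```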

Once the wallet is connected, the user can exchange digital assets like a trader on the NYSE floor. 

Behind the scenes (on the backend), the user’s wallet connects directly to an additional server running the blockchain’s application, called a node. The node stores data about the blockchain and communicates with the other nodes in the network, including the validators that create blocks.

These application nodes use about the same amount of electricity as a typical Web2 server. However, two servers must now be accessed: one running the web application and the other running the blockchain.

At this point, digital infrastructure providers become essential. They must devise efficient and innovative server solutions. 

Web3 Demands Strong Infrastructure

Physical Servers

Although Web3 requires access to many servers, the Web3 movement is opposed to using the public cloud due to centralization concerns. 

Various Web3 projects, such as Solana, have been renting and buying several thousand “bare metal” (physical) servers from a variety of players. The leasing of these servers attracted the attention of Equinix Metal, who hosted “Uncensored,” the Infrastructure Blockchain event, to promote best practices in this growing space.  

Ankr’s Remote Procedure Call (RPC) service has served over 700 million monthly requests from Argentine users, with similar volumes from Vietnam. An RPC occurs when a computer program executes a procedure in a different address space, such as on a different computer in a shared network.
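Concretely, a request to a blockchain RPC node is typically a small JSON-RPC 2.0 payload sent over HTTP. A sketch of building one (eth_blockNumber is a standard Ethereum method; the endpoint URL and transport are provider-specific and omitted here):

```python
import json

def make_rpc_request(method: str, params: list, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body, the wire format that services
    like Ankr's relay to blockchain nodes."""
    return json.dumps({
        "jsonrpc": "2.0",   # protocol version, required by the spec
        "method": method,   # e.g. "eth_blockNumber"
        "params": params,   # positional parameters for the method
        "id": request_id,   # lets the caller match responses to requests
    })

body = make_rpc_request("eth_blockNumber", [])
print(body)
```

The node replies with a matching JSON object whose `result` field carries the answer (here, the latest block number in hex).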

Hetzner offers a competitive infrastructure hardware product to clients in Germany and Finland through its AX101 and AX161 configurations. Unfortunately, most bare metal servers stocked by providers do not match the ideal specs needed for Web3.

Lower Redundancy

As peer-to-peer networks, blockchains are decentralized and distributed by their nature. This means that redundancy (backups) exists seamlessly within the network. If some physical hardware fails or a network outage happens, the blockchain itself remains virtually unharmed. 

In a traditional enterprise environment, it’s not uncommon to have multiple power supplies with layers of hardware to ensure network redundancy.  

Greater Disk Speed, Size, and Storage

We can imagine a blockchain as a growing stack of connected Lego bricks. The first brick is the “genesis block.” The stack constantly grows from one side, and each block contains a group of transactions that together form the distributed ledger. This is a huge amount of data, usually stored in LevelDB (an open-source NoSQL database), and it grows larger with every epoch (a brief span of blockchain time).
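The brick-stacking structure can be sketched as a toy hash-linked chain in Python. This illustrates only the linking principle, not how LevelDB stores production chain data:

```python
import hashlib
import json

def make_block(transactions: list, prev_hash: str) -> dict:
    """A toy block: a transaction payload plus the hash of the previous
    block, which is what links the 'Lego bricks' together."""
    payload = {"transactions": transactions, "prev_hash": prev_hash}
    block = dict(payload)
    # Deterministic serialization so the same content yields the same hash.
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block(["genesis"], "0" * 64)          # the first brick
block1 = make_block(["tx1", "tx2"], genesis["hash"])  # stacked on top
print(block1["prev_hash"] == genesis["hash"])         # True: chained
```

Because each block embeds the previous block’s hash, altering any earlier transaction would change every hash after it, which is why the ledger keeps growing but cannot quietly be rewritten.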

Unfortunately, Ankr has found that most network-attached storage options and virtualization technologies are insufficient to keep up with these demands.

This deficit means that most bare metal configurations using regular solid-state drives with less than 4TB of storage will not be sufficient for a high-traffic Web3 workload.

According to Ankr, 4TB of NVMe (non-volatile memory express) solid-state storage is the minimum requirement, though 8TB of NVMe per server is preferable for RPC nodes. Archive nodes, which store entire copies of blockchains, need between 12 and 30TB of NVMe per physical server, and some chains require even more.

Web3’s Node Types

RPC Full Node
- The most common node type.
- Used by developers and projects to connect and interact with a blockchain.
- Every Web3/DeFi/metaverse use case needs access to RPC full nodes.

Archive Node
- Used by market research and analytics apps to track a blockchain’s activity.
- Requires a lot of fast storage, starting at 12TB of NVMe.

Validator Node
- Creates the next block and receives crypto rewards from the network for doing so.
- In proof-of-stake (PoS) blockchains like Ethereum 2, Binance Smart Chain, and Solana, validators replace miners (as in Bitcoin).
- Runs on an enterprise-grade bare metal or virtual server.

Low Latency and Speed Are Critical

For most Ethereum-based chains, a typical RPC full node uses about 50 Mbps of bandwidth. At that rate, 30TB of data transfer per month per server is sufficient.
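As a back-of-the-envelope check (assuming decimal units and a 30-day month), 50 Mbps of sustained traffic works out to roughly 16TB per month, comfortably within a 30TB allowance:

```python
MBPS = 50
SECONDS_PER_MONTH = 60 * 60 * 24 * 30  # 30-day month

# Convert megabits/second to bytes, then accumulate over a month.
bytes_per_month = MBPS * 1e6 / 8 * SECONDS_PER_MONTH
tb_per_month = bytes_per_month / 1e12

print(round(tb_per_month, 1))  # 16.2 TB, well under the 30TB allowance
```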

In the last year, the Argentine peso fell 36% in value against the US dollar. This drove a switch from the peso to other currencies (many of them cryptos), and hence the 700 million monthly RPC requests noted above.

As DeFi supplements or even replaces traditional finance, connections to proximity nodes and low-latency (low-delay) links become critical parts of the financial infrastructure. Blockchain gaming applications that adopt NFTs for in-game purchases and other transactions likewise demand low latency.

Closing Thoughts

Web3 is the promising frontier of this decade. To be successful, digital infrastructure providers must offer new bare metal configurations that are fast, can hold massive amounts of data, and maintain low latency.

Web3 demands a new breed of digital infrastructure providers maximizing the utility of bare metal configurations for faster, larger, and more efficient data processing. Web2 will likely, and finally, yield to Web3, but only after the infrastructure is built, and that depends on the forward-looking innovators among us.

Disclaimer: The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

Brain-Computer Interfaces and the Metaverse

What are the commercial promises of brain-computer interfaces, and how will they further connect us to the promises of the metaverse? These interfaces, initially sensory (worn on the scalp or skin) and possibly, in the future, implanted in the brain, could become platforms that transform all parts of our diverse societies.

The Brain-Computer Merge

You may not have noticed, but with each passing day, we are slowly merging more and more with the technology around us. Our smartphones are our tools for instant communication and the answers to many of our questions, allowing us to focus on other things rather than that which occupied our minds in the past. 

We have implanted pacemakers and defibrillators that tell the cardiologist all about our hearts and correct their irregularities. We have implanted lenses in our eyes to fix vision issues. Yet the technology around us now, especially our smartphones, will not remain the most common interface of our future.

What our smartphones do, and much more, will likely be incorporated into our bodies. Though Google Glass was not a successful product, many of its users were the wrong targets, and it was also burdened with tech glitches and security concerns. It did, however, show that we could bring technology closer, supplying useful information and sending sound directly into the ear via bone conduction.

Source: The Verge

As brain-computer interface (BCI) systems progress, they will be an essential step forward in the brain-computer merge. A BCI’s role is the interpretation of the user’s neural activity. A BCI is just part of an environment that is more wired, has more sensors, and is digitally connected.   

With the current generation of experimental brain-computer interfaces, humans can use only their minds to play video games, articulate prosthetic limbs, control their own limbs, operate wheelchairs, and more. BCIs also have the potential to give patients who suffer from Alzheimer’s disease, head injuries, or stroke control of computers that help them speak.

BCI technology will likely turn toward enhancing sensory connection and communication. The most common use of BCI technology today is directional control of a computer cursor: imagine moving your mouse and clicking without needing the mouse.

This is already being done with electrophysiological signals alone (electrical signals from the brain and body, read by a system of sensors). Users, both human and animal, have already employed this kind of BCI control to act on the external world without conventional neuromuscular pathways such as speech.

Brain-Computer Interfaces Alongside the Metaverse

The metaverse is a fusing of the real and digital worlds. It’s either an entirely simulated digital environment, as is the case of virtual reality (VR), or an overlay of a digital experience to the real world with augmented reality (AR). 

Thought of in a different way, the metaverse can be a platform where users can feel the real through an animated or digital world encounter. The metaverse that combines augmented reality with the real world can give us more immersive, next-level platforms. The metaverse is intended to make our lives more natural and “realistic,” including socializing, work, and entertainment.  

Scientists, researchers, corporations, and entrepreneurs are making strides with their new and advanced applications. Many of these applications are intended to augment human abilities, fulfilling desires to be stronger, smarter, and better looking. 

Exoskeleton by SuitX

With the BCI connection, it’s believed that part of this initiative will transform technology, medicine, society, and the future. Current devices can already cultivate human abilities beyond former standards, not dissimilar to the powers of Iron Man: SuitX’s exoskeleton can reduce lower-back loads by 60%.

As these technologies continue to merge with BCIs, it’s believed that the opportunity to augment human capability will be even greater.  

Elon Musk’s Neuralink has been working on a consumer-intended high-bandwidth BCI that focuses on four parts of the brain. 

Source: Neuralink

Neuralink has shared a video of a macaque playing “MindPong” via chips embedded in a few regions of its brain. The primate was trained to play the game by simply thinking about moving its hands. The goal is for future Neuralinks to tie the brain to the body’s motor and sensory cortices, enabling people with paraplegia to walk again.

Inside a Metaverse

Technical training inside a metaverse consists of providing technicians with advanced features and simulations capable of operating 3D representations of complex systems, instruments, or machinery. 

BCIs with simulation technology will combine to empower the metaverse, allowing remote support and maintenance of devices and equipment. This could be a matter of connecting with experts who would control the repair of the system by thinking about moving their own hands to make repairs. 

This would allow for the “switching on” of virtual reality engineers and technicians when an unforeseen repair occurs. It’s not much of a step beyond this to imagine the same procedure for doctors and surgeons.

Dating and socializing in virtual reality may become a common occurrence with virtual movies and museum tours. Such interactions could be enhanced with the direct brain interface that enriches the mind of our partners, adding to positive experiences from the external environment (“I wish you could see things from my point of view” would be possible).  

Closing Thoughts

Applications of brain-computer interfaces are spread across many fields and are not limited to military or medical purposes. The fullest realization of these technologies will certainly take time and incremental improvements, but they will be well-suited for the metaverse. 

This process will require significant testing and a long period of adoption. However, brain interfaces could be game changers in the lives of many, enabling incredible experiences.

We could eventually see a future that no longer has brain-computer interfaces but goes toward the next step of direct brain-to-brain connections. This new type of connection is a very exciting step that would bring humans closer together, allowing us to understand how we all experience the real and virtual worlds.  

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

The Convergence of Technology and Healthcare

We saw the changes to our lives with the Covid-19 pandemic playing the role of catalyst for changes in life sciences and healthcare. This article will discuss how new technologies, including blockchain, cybersecurity, and the needed talent behind these, are impacting the medical sector.

Recent Changes to Healthcare

We have seen how the past few years have been shaped by the Covid-19 pandemic, which disrupted and revolutionized nearly every sector of our economy. 

When we look at monetary investment, it’s evident that technology spending is focused on healthcare. A report from Bain and Co. found that even amid economic uncertainty, healthcare providers still plan to invest in tech, with software a top-five strategic priority for 80% of providers and a top-three priority for 40%.

This spending is for several reasons: efficiency, cost reduction, and telemedicine, whether by phone or video. Heavy technology investment in the era of Covid-19 caused healthcare to leapfrog into patients’ homes. 

These changes will be the driver of healthcare’s growth for the next few years. Yet we need to have a strong understanding of how the consumer fits into this system of delivering service, what their preferences are, and the new habits they are forming.

Once Before, in the 1920s

Periods of economic and geopolitical uncertainty have led to healthcare advancements. 

In the 1920s, there were many geopolitical tensions that eventually led to wars, but throughout the decade and the rest of the 20th century, there were remarkable advances in medicine. 

The construction of hospitals that followed the passing of the Hill-Burton Act in 1946 laid the foundation of our current health delivery system, just as our highway system and other infrastructure changed the face of America and its economy. We’ll likely see a similar change around needed vaccines and other due innovations.

Rather than creating roads, bridges, and buildings, we’ll see digital infrastructure. Out of the discovery of the first mRNA Covid vaccines, we’ll find many ways to accelerate the process through biotechnology and innovation. Technology is an added dimension to healthcare innovation that has appeared out of the Covid turmoil. When technology is added to the mix, we’re going to see some fantastic opportunities.  

The Covid Cause

It’s remarkable to think that a significant, globally impactful event became the catalyst that accelerated the healthcare sector’s tech investment. Had the necessary Covid closures lasted only a single week, many of these changes would not have resulted.

Doctor visits would have been pushed back for that week instead of finding a remote solution that was needed to provide the required services and the resulting changed behaviors they have brought. The R&D plans that are now part of biotech and medical companies would likely not have manifested. 

But we see that necessity is the mother of innovation, and because of Covid-19, these changes are incorporated and permanent. Many experts believe that the two years of Covid moved the industry ahead 5 to 10 years.

A Move Toward NFTs in Healthcare

Non-Fungible Tokens (NFTs) have been an investment darling in the art world but have yet to gain much prominence outside that and the collecting arenas. This lack of diversified uses is starting to change. Healthcare is up next.

NFTs are an exciting area for healthcare services. It’s easy to imagine a world where an NFT becomes a patient’s healthcare profile. Such a profile could carry personal information, such as the entire genome, full medical history, and payment information, as a unique footprint.

An NFT can also provide its owner a pathway into the healthcare system and its services. This information can be combined with the banking system, making care more viable. Imagine a health savings account tied directly to the NFT through an oracle (a third-party gateway).

This would allow someone to fund their health savings account through their W2-qualifying job, with qualifying charges automatically withdrawn from the account.
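As a sketch of what such a profile might look like, here is a hypothetical metadata payload for a patient-profile NFT. Every field name below is an illustrative assumption, not an existing healthcare or NFT standard:

```python
import json

# Hypothetical patient-profile NFT metadata. Sensitive records would live
# off-chain (encrypted), with the token holding only pointers to them.
patient_profile = {
    "name": "Patient Health Profile",
    "attributes": {
        "genome_uri": "ipfs://<encrypted-genome-pointer>",
        "medical_history_uri": "ipfs://<encrypted-record-pointer>",
        "payment_account": "hsa",            # linked health savings account
        "oracle": "<third-party gateway>",   # supplies off-chain updates
    },
}

print(json.dumps(patient_profile, indent=2))
```

The placeholder URIs and the oracle field stand in for whatever storage and data-feed services an actual implementation would choose.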

This kind of payment system is just starting to appear at the municipal level. Cities like New York and Miami have begun moving toward such a system, with Philadelphia and Dearborn, Michigan, signaling similar moves. It’s not far-fetched to imagine a similar approach for healthcare payments.

Cybersecurity in Healthcare

Wherever there is human involvement, there is the potential for security vulnerabilities. A second issue all companies face is finding talent capable of building systems and products that protect company and personal data. There is an ongoing global shortage of nearly 3.5 million cybersecurity professionals across all industries, including 700,000 unfilled cybersecurity jobs in the US.

Cybersecurity for healthcare also requires developing technicians who can play defense, responding quickly to cyberattacks in real time. Hacking is accelerating and sits high on the risk profile of many companies, not just in tech.

Interestingly, one of hacking’s growing tools, AI, may also be its best solution as more information and services are digitized. Significant investment is happening in software projects that help protect and defend all data. In November 2022, Crunchbase showed 258 privacy startups that have raised over $4.3 billion, with $800 million of this total raised in the last year.  

Life sciences and healthcare are industries that drive policies and security. Many boards and audit committees in the healthcare and life science sectors are attempting to identify various cyber risks and vulnerabilities. It’s fully expected that the demand for cyber-fluent personnel will increase dramatically. 

Permanent Changes Coming to Healthcare

Tech is now taking over in several areas, including consumer electronics. Wearables and connected devices are becoming a more common source of medical information. AliveCor’s KardiaMobile device is a 6-lead EKG that can send information via smartphone directly to the patient’s cardiologist for review.

Source: Alivecor

The Las Vegas consumer electronics show is filled with sensors, apps, and embedded personalization. This expansion of devices for our health will only increase as the 5G networks expand their reach across the United States. The impacts will be wide-ranging, but ultimately focus on enhancing our lives through tech. 

One crucial, long-term benefit is that we are now seeing the healthcare economy moving from a sickness focus to a wellness mindset. This change is easier to accomplish with technology as we can monitor our health and see when things change.  

Upcoming Healthcare Trends

The healthcare sector will first see a move toward modernization in human resources, finance, and procurement through cloud services. Moving all legacy enterprise systems to the cloud will take nearly ten years. 

Next, innovation must tackle the back office to front office connection, including consumer-level devices. We have been discussing healthcare costs for decades, and the tech is now available to make it more efficient. This change can drive out costs and potentially deliver care to all.  

Closing Thoughts

Technology in healthcare has been accelerated by Covid-19, pushing digital health access, and drug and vaccine innovation. These trends are altering research and development pathways for healthcare. 

NFTs have begun to enter the healthcare space and, in the future, will likely be a secure way to provide needed information to providers, including genome and medical history. Cybersecurity issues will come to the forefront in healthcare tech with more need for talent and solutions to keep users’ data secure. 

This need for talent will include the opportunity for tech to provide equitable solutions that lower costs and bring healthcare to all. A process of modernization that puts enterprise services on the cloud will be the biggest change we see. Further, it will promote a focus on wellness over sickness as consumer devices become ubiquitous.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.
