What is Somnium Space?

Somnium Space launched in 2017 and is, by area, one of the largest virtual blockchain worlds (VBWs). As in other VBWs, users of Somnium Space can create fully customizable environments and programmable independent VR experiences within its larger connected world. These environments are possible through Somnium’s four key offerings:

  1. An SDK to create avatars and property
  2. An NFT marketplace where game-based assets can be traded
  3. A module for building environments and structures within them
  4. Virtual reality experiences

Somnium Space lets creators build VR experiences from their own imaginations and monetize them, with integrated blockchain technology ensuring that creators are both the designers and the main recipients of the value they generate. Let’s take a deeper look at this second-largest take on the metaverse.

Courtesy of Somnium Space

Somnium Space Basics

Somnium Space is a VBW built on the Ethereum blockchain. Somnium is an open-source platform with an immersive VR world that allows users to buy digital real estate, including land, homes, buildings, and several other in-game assets with value. Somnium’s immersive dynamics let players build and monetize their environments or visit other users’ creations, such as swimming pools, museums, restaurants, casinos, and nightlife venues. The possibilities for building within Somnium are nearly limitless, allowing for the construction of unique experiences, worlds, and assets.

While traditional multiplayer VR games have their users divided into mirrored instance rooms via sub-servers, Somnium hosts all the players in a vast interconnected world. Within its broader VR universe, users can create Somnium environments, customized and programmable independent VR experiences. 

What’s more, NFT assets from within Somnium are compatible with other metaverses and platforms throughout the Ethereum ecosystem (and potentially other blockchains).

Somnium is built on the four main elements listed in the introduction, and it has deeply incorporated NFTs into its technology, allowing players to bring NFTs in from other parts of the decentralized ecosystem.

Somnium’s Tokenomics

In traditional gaming, users generate value that goes to the developer. Players purchase the game or, with freemium games, buy upgrades, access, and customizations a la carte.

They generally can’t take in-game assets out of the game. For example, if a player buys upgraded armor, unlocks a new vehicle, or gains access to a new world, that value remains in-game only. You cannot take the armor or vehicle to another game, nor unlock the asset’s value for use on another platform.

However, in Somnium and other blockchain-based games and metaverses, the opposite is true: assets are tokenized, increasing their benefit to the owner. As an Ethereum application, Somnium can tokenize in-game assets such as real estate, avatars, wearables, and collectibles, decoupling those assets from Somnium, the company. This player-generated value lets players access the token value created in Somnium elsewhere in the broader crypto and token economy.

The Somnium economy is based on three token assets:

Somnium’s Cube Token (CUBE)

The CUBE is an ERC-20 (Ethereum) token that works as Somnium’s native utility token. The CUBE streamlines in-game player transactions and is most similar to tokens bought at an arcade. With an Ethereum wallet, players can hold ETH, CUBE, and NFTs (in ERC-721 form). 
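Because CUBE follows the ERC-20 standard, any generic Ethereum tooling can read it. Below is a minimal sketch using web3.py (v6); the RPC endpoint and both addresses are placeholders rather than Somnium’s real values, so substitute verified ones before any real use.

```python
# Minimal sketch: reading a wallet's CUBE balance with web3.py (v6).
# The RPC URL, token address, and wallet address below are placeholders,
# not Somnium's real values -- look them up before real use.
from web3 import Web3

ERC20_ABI = [  # only the two read-only calls we need from the ERC-20 standard
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]

w3 = Web3(Web3.HTTPProvider("https://eth.example-rpc.com"))  # placeholder RPC
cube = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000000"),  # placeholder
    abi=ERC20_ABI,
)
wallet = Web3.to_checksum_address("0x0000000000000000000000000000000000000001")  # placeholder

raw = cube.functions.balanceOf(wallet).call()
print(raw / 10 ** cube.functions.decimals().call(), "CUBE")
```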

CUBE is the bridge between assets for in-game commerce. As Somnium’s universe expands, CUBE will develop further in-world utility, letting players transact throughout its VR world.

CUBE’s price, courtesy of Coinmarketcap.com

Somnium’s Land Parcels (PARCELs)

Somnium Space had two “Initial Land Offerings” (ILOs) to issue PARCELs to stakeholders via the OpenSea NFT marketplace. Players who want to build their own Somnium worlds must obtain at least one land PARCEL. Players can also put any NFT on their PARCEL and explore the PARCEL in VR. 

Somnium map, courtesy of Somnium Space

Somnium’s Avatars

At the end of 2020, the Somnium team expanded CUBE’s utility with AVATAR tokenization. Players can mint full-body VR avatars onto the blockchain via CUBE; a purchased AVATAR becomes part of the player’s inventory. AVATARs are compatible with other virtual worlds across many digital platforms.

CUBE tokens can be used to purchase another player’s avatar in NFT form. The buyer’s CUBE is exchanged for the NFT AVATAR of the seller. The ability to create avatars within Somnium exemplifies CUBE’s growing utility.  

Somnium’s Karma Levels

The Karma level indicates how VR citizens perceive each other. Somnium calculates a player’s Karma level from three main metrics:

  1. Rating: how other virtual citizens perceive them based on on-platform interactions. 
  2. Engagement: a score reflecting each player’s economic activity, including time spent in-game, land ownership, and world discovery rate. 
  3. Other factors: these include building, public participation, and event organizing.

Players will earn CUBE based on their Karma level, and those who provide value to the community, acting as instructors or in guilds, will too.
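As a purely illustrative sketch, not Somnium’s published formula, a Karma score blending the three metrics above might be computed like this (the weights and 0-100 scales are invented for illustration):

```python
# Hypothetical Karma score: a weighted blend of the three metrics above.
# Weights and 0-100 input scales are invented for illustration only.
def karma_level(rating: float, engagement: float, other: float) -> float:
    """Each input is assumed normalized to a 0-100 scale."""
    weights = {"rating": 0.5, "engagement": 0.3, "other": 0.2}
    score = (weights["rating"] * rating
             + weights["engagement"] * engagement
             + weights["other"] * other)
    return round(score, 1)

print(karma_level(rating=80, engagement=65, other=40))  # -> 67.5
```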

User Opportunities

Somnium offers a Unity-based Software Development Kit (SDK) for creating and customizing property and avatars, with the avatars interoperable with other platforms and virtual worlds.

The SDK includes a builder mode so that complex and intricate structures can be designed. Once developed, these can be listed as assets on the NFT marketplace and become part of the metaverse.  

Builder mode, courtesy of Somnium Space

Somnium is now interoperable with Polygon so users can transfer their NFTs in and out of Somnium, saving on fees. These NFTs can be any of the following:

  • Cars or other vehicles
  • Unique avatar wearables 
  • Event tickets for entry to a parcel
  • Teleportation hubs to travel across the metaverse
  • Treasure hunts leading to CUBEs

There will be a maximum of 100 million CUBE tokens minted, limiting supply and supporting value for holders. The fees Somnium charges are minimal, making it easier to gain from a democratized metaverse economy.

Closing Thoughts

VR platforms such as VRChat, AltSpace, and Rumii are popular venues for distanced social interaction and corporate meetings. Concurrently, Ethereum-based blockchain metaverses like Somnium Space have built multiplayer ecosystems that unlock value in a novel way. The true idea of the metaverse is an entirely decentralized world where we interact using blockchain technology.

By integrating blockchain, Somnium users can create experiences from their imaginations and monetize these VR experiences in a way other platforms do not allow. While the play may be virtual in Somnium’s version of the metaverse, it has created a real economy that moves beyond the space that Somnium inhabits and potentially blurs the lines into the augmented and real worlds.  

Somnium could be a hit if it can attract the right users, ones who will create exciting experiences that others will be enticed to partake in and, more importantly, pay for. Its potential success is hard to predict: it relies on users for content creation, which is a risky proposition, and while it allows creators to gain, it always takes its cut.

Somnium has yet to gain a significant following, even though, by digital area, it has the second-largest metaverse environment, behind Decentraland. For perspective, at the end of 2021 Decentraland hosted 300,000 monthly users, while YouTube had 2.6 billion. Immersive and original content is vital to Somnium’s success. Let’s see what the inevitable future of VR brings.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

AI’s Transformation of Oncology

Artificial intelligence (AI) is constantly reshaping our lives. It saves companies and individuals time and money, and its applications in medicine may even save our lives.

Understanding AI’s evolution and achievements helps us model future development strategies. One of AI’s most significant medical impacts is already being seen, and will continue to grow, in oncology.

AI has opened essential opportunities for cancer patient management and is being applied to aid in the fight against cancer on several fronts. We will look into these and see where AI can best aid doctors and patients in the future. 

Where Did AI Come From?

Alan Turing first conceived the idea of computers mimicking critical thinking and intelligent behavior in 1950, and in 1956 John McCarthy coined the term artificial intelligence (AI).

AI started as a simple set of “if A then B” computing rules but has advanced dramatically in the years since, comprising complex multi-faceted algorithms modeled after and performing similar functions to the human brain.

AI and Oncology

AI has now taken hold in so many aspects of our lives that we often do not even realize it. Yet it remains an emerging and evolving model that benefits different scientific fields, including aiding those who manage cancer patients.

AI excels at specific tasks, particularly recognizing patterns and interactions after being given sufficient training samples. It uses training data to develop a representative model, then applies that model to process new data and aid decision-making in a specific field.

When applied to precision oncology, AI can reshape the existing processes. It can integrate a large amount of data obtained by multi-omics analysis. This integration is possible because of advances in high-performance computing and several novel deep-learning strategies. 

Notably, applications of AI are constantly expanding in cancer screening and detection, diagnosis, and classification. AI is also aiding in the characterization of cancer genomics and the analysis of the tumor microenvironment, as well as the assessment of biomarkers for prognostic and predictive purposes. AI has also been applied to follow-up care strategies and drug discovery.  

Machine Learning and Deep Learning

To better understand AI’s current and future roles, two essential terms that fall under the AI umbrella must be clearly defined: machine learning and deep learning.

Machine Learning

Machine learning is a general concept describing the ability of a machine (a computer) to learn from data and thereby improve its patterns and models of analysis.

Deep Learning

Deep learning, on the other hand, is a machine learning method that uses layered algorithmic systems, called deep networks, which mimic networks of biological neurons. When trained, these deep networks achieve high predictive performance.

Both machine and deep learning are central to the AI management of cancer patients.  
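To make the distinction concrete, here is a minimal, illustrative comparison of a classic machine learning model and a small neural network on synthetic data using scikit-learn. It is a toy, not a clinical model:

```python
# Illustrative only: a classic ML model vs. a small neural network
# on synthetic data -- not a clinical tool.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic "patients": 20 numeric features, binary label.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

shallow = LogisticRegression(max_iter=1000).fit(X_train, y_train)
deep = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                     random_state=0).fit(X_train, y_train)

print("logistic regression:", shallow.score(X_test, y_test))
print("small neural net:  ", deep.score(X_test, y_test))
```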

Current Applications of AI in Oncology

To understand the roles and potential of AI in managing cancer patients and show where the future uses of AI can lead, here are some of the current applications of AI in oncology.  

In the charts below, “a” refers to oncology and related fields and “b” to the types of cancers diagnosed.

Courtesy of the British Journal of Cancer. (a) Oncology and related fields: cancer radiology 54.9%, pathology 19.7%, radiation oncology 8.5%, gastroenterology 8.5%, clinical oncology 7.0%, and gynecology 1.4%. (b) Tumor types: general cancers 33.8%, breast cancer 31.0%, lung cancer 8.5%, prostate cancer 8.5%, colorectal cancer 7.0%, brain tumors 2.8%, and six other tumor types at 1.4% each.

The above graph, from the British Journal of Cancer, summarizes all FDA-approved artificial intelligence-based devices for oncology and related specialties. The research found that 71 devices have been approved. 

As we can see, most of these are for cancer radiology, meaning they detect cancer through various radiological scans. According to the researchers, the vast majority of the approved devices (>80%) relate to the complicated area of cancer diagnostics.

Courtesy of cancer.gov

The image above shows a deep learning algorithm trained to analyze MRI images and predict the presence of an IDH1 gene mutation in brain tumors.
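Such a model is typically a convolutional neural network trained on labeled scans. The PyTorch sketch below is a toy stand-in, with random tensors in place of real MRI data and an invented architecture; it is meant only to show the general shape of this kind of classifier, not the published IDH1 model:

```python
# Toy sketch of an MRI-slice classifier, NOT the published IDH1 model.
# Random tensors stand in for real, labeled MRI data.
import torch
import torch.nn as nn

class TinyMRINet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # 2 classes: IDH1-mutant vs. wild-type

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyMRINet()
scans = torch.randn(8, 1, 128, 128)   # 8 fake single-channel MRI slices
labels = torch.randint(0, 2, (8,))    # fake mutation labels
loss = nn.CrossEntropyLoss()(model(scans), labels)
loss.backward()                        # one illustrative training step
print(float(loss))
```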

Concerning the tumor types that AI-enhanced devices can investigate, most devices apply to a broad spectrum of solid malignancies classed as cancer in general (33.8%). However, the specific tumor accounting for the largest number of AI devices is breast cancer (31.0%), followed by lung and prostate cancer (both 8.5%), colorectal cancer (7.0%), brain tumors (2.8%), and six other types (1.4% each).

Moving Forward with AI

Since its origin, AI has shown its capabilities in nearly every scientific branch, and its future growth potential in oncology remains impressive.

The devices approved so far are conceived not as a substitute for classical oncological analysis and diagnosis but as an integrative tool for exceptional cases and for improving the management of cancer patients.

A cancer diagnosis has classically represented a starting point from which appropriate therapeutic and disease management approaches are designed. AI-based diagnosis is a step forward and will continue to be an essential focus in ongoing and future development. However, it will likely be expanded to other vital areas, such as drug discovery, drug delivery, therapy administration, and treatment follow-up strategies.

Current cancer types with a specific AI focus (breast, lung, and prostate cancer) are all high in incidence. This focus means that other tumor types have the opportunity for AI diagnosis and treatment improvements, including rare cancers that still lack standardized approaches. 

However, building large and reliable data sets for rare cancers will take longer. Grouped together, rare cancers are one of the essential categories in precision oncology, and this group will become a growing focus for AI.

Given the positive results already seen with AI in oncology, AI should be allowed to expand its reach and provide solutions to the cancer-related questions it has the potential to resolve. With this opportunity, AI could become the next step in a cancer treatment revolution.

Closing Thoughts

Artificial intelligence (AI) is reshaping many fields, including medicine and the entire landscape of oncology. AI brings to oncology several new opportunities for improving the management of cancer patients. 

It has already proven its abilities in diagnosis, as seen by the number of devices in practice and approved by the FDA. The focus of AI has been on the cancers with the highest incidence, but rare cancers amount to a massive avenue of potential when grouped.  

The next stage will be to create multidisciplinary platforms that use AI to fight all cancers, including rare tumors. We are at the beginning of the oncology AI revolution. 


Brain-Computer Interfaces

Brain-computer interfaces are devices that allow people to control machines with their thoughts. This technology has been the stuff of science fiction and even children’s games for years. 

Mindflex game by Mattel

On a more advanced level, brain-computer technology remains highly experimental but has vast possibilities. First to mind (no pun intended) would be aiding those with paralysis by creating electrical impulses that let them regain control of their limbs. Second, the military would like its service members to operate drones or missiles hands-free on the battlefield.

There are also concerns raised when a direct connection is made between a machine and the brain. For example, such a connection could give users an unfair advantage, enhancing their physical or cognitive abilities. It also means hackers could steal data related to the user’s brain signals.  

In this article, we explore several opportunities and issues related to brain-computer interfaces.

Why Do Brain-Computer Interfaces Matter?

Brain-computer interfaces allow their users to control machines with their thoughts. Such interfaces can aid people with disabilities, and they can enhance the interactions we have with computers. The current iterations of brain-computer interfaces are primarily experimental, but commercial applications are just beginning to appear. Questions about ethics, security, and equity remain to be addressed. 

What Are Brain-Computer Interfaces? 

A brain-computer interface (BCI) enables the user to control an external device with their brain signals. One BCI under development would allow patients with paralysis to spell words on a computer screen.

Additional use cases include: a spinal cord injury patient regaining control of their upper body limbs, a BCI-controlled wheelchair, or a noninvasive BCI that would control robotic limbs and provide haptic feedback with touch sensations. All of this would allow patients to regain autonomy and independence.

Courtesy of Atom Touch

Beyond the use of BCIs for the disabled, the possibilities for BCIs that augment typical human capabilities are abundant. 

Neurable has taken a different route, creating headphones designed to improve focus. They require no touch to control, responding instead to a wink or a nod, and will be combined with VR for a richer experience.

Courtesy of Neurable

How do BCIs Work?

Training

Generally, a new BCI user will go through an iterative training process. The user learns how to produce signals that the BCI will recognize, and then the BCI will take those signals and translate them for use by way of a machine learning algorithm. Machine learning is useful for correctly interpreting the user’s signals, as it can also be trained to provide better results for that user over time. 
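As a rough illustration of that translation step, the sketch below classifies synthetic “EEG” windows into two intended commands with scikit-learn. The data is random and the features are deliberately simplistic, so this is not any vendor’s pipeline; real BCIs use carefully engineered features and per-user calibration:

```python
# Illustrative sketch: classifying synthetic "EEG" windows into two
# intended commands. The signals here are random noise with an
# artificial class difference injected -- not real brain data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_windows, n_samples = 200, 256          # 200 one-second windows @ 256 Hz

signals = rng.normal(size=(n_windows, n_samples))
intents = rng.integers(0, 2, n_windows)  # 0 = "left", 1 = "right" (fake)
signals[intents == 1] *= 1.5             # class 1 gets higher power (toy effect)

# Toy features: mean spectral power in two frequency bands per window.
spectrum = np.abs(np.fft.rfft(signals)) ** 2
features = np.column_stack([spectrum[:, 8:13].mean(axis=1),    # ~alpha band
                            spectrum[:, 13:30].mean(axis=1)])  # ~beta band

clf = LinearDiscriminantAnalysis().fit(features[:150], intents[:150])
print("held-out accuracy:", clf.score(features[150:], intents[150:]))
```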

Connection

BCIs will generally connect to the brain in two ways: through wearable or implanted devices. 

Implanted BCIs are often surgically attached directly to brain tissue, though Synchron has developed a catheter-delivered implant that is threaded through blood vessels to capture brain signals, with its electronics seated in the chest. Implants are more suitable for those with severe neuromuscular disorders and physical injuries, where the cost-benefit balance is more favorable.

A person with paralysis could regain precise control of a limb by using an implanted BCI device attached to specific neurons; any increase in function would be beneficial, but the more accurate, the better. Implanted BCIs can measure signals directly from the brain, reducing interference from other body tissues. However, most implants pose other risks, primarily surgical ones such as infection and rejection. Some implanted devices reduce these risks by placing the electrodes on the brain’s surface using a method called electrocorticography (ECoG).

Courtesy of the Journal of Neurosurgery

Wearable BCIs, on the other hand, generally require a cap containing conductors that measure brain activity detectable on the scalp. The current generation of wearable BCIs is more limited, supporting applications such as augmented and virtual reality, gaming, or controlling an industrial robot.

Most wearable BCIs use electroencephalography (EEG), with electrodes contacting the scalp to measure the brain’s electrical activity. A more recent, emerging wearable method uses functional near-infrared spectroscopy (fNIRS), in which near-infrared light is shone through the skull to measure blood flow that, when interpreted, can indicate information such as the user’s intentions.

To enhance their usefulness, researchers are developing BCIs that utilize portable methods for data collection, including wireless EEGs. These advancements allow users to move freely. 

The History of BCIs

Most BCIs are still considered experimental. Researchers began testing wearable BCI tech in the early 1970s, and the first BCI implanted in a human was Dobelle’s prototype, implanted in 1978 into “Jerry,” a man blinded in adulthood. A BCI with 68 electrodes was implanted into Jerry’s visual cortex and succeeded in producing phosphenes, the sensation of “seeing” light.

In the 21st century, BCI research increased significantly, producing thousands of research papers. Among the milestones: in 2005, tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI, as part of Cyberkinetics Neurotechnology’s first nine-month human trial of its BrainGate chip implant.

Even with these advances, it is estimated that fewer than 40 people worldwide have implanted BCIs, all of them considered experimental. The market is still limited, with projections that the total market will only reach $5.5 million by 2030. Two significant obstacles to BCI development are that each user generates their own distinct brain signals and that those signals are difficult to measure.

The majority of BCI research has historically focused on biomedical applications, helping those with disabilities from injury, neurological disorder, or stroke. The first BCI device to receive Food and Drug Administration authorization, in April 2021, was the IpsiHand, which uses a wireless EEG headset to help stroke patients regain arm and hand control.

Concerns With BCI

The legal and security implications of BCIs are the most common concerns among BCI researchers. Given the prevalence of cyberattacks, there is understandable concern about hacking or malware that could intercept or alter brain-signal data stored on a device like a smartphone.

The US Department of Commerce (DoC) is reviewing the security implications of exporting BCI technology. The concern is that foreign adversaries could gain an intelligence or military advantage. The DoC’s decision will affect how BCI technology is used and shared abroad.

Social and Ethical Concerns

Those in the field have also considered BCIs’ social and ethical implications. Wearable BCIs can cost from hundreds to thousands of dollars, a price that would likely mean unequal access.

Implanted BCIs cost much more. The training process for some types of BCIs is substantial and could burden users. It has also been suggested that inaccurate translations of BCI signals into speech could cause great harm.

The Opportunities of BCIs

The main initial opportunity BCIs provide is helping those paralyzed by injury or disorder regain control of their bodies and communicate. This is already seen in current research, but in the long term it is only a stepping stone.

The augmentation of human capability, be it on the battlefield, in aerospace, or in day-to-day life, is the longer-term goal. BCI-controlled robots could also take on hazardous tasks and environments, such as handling radioactive materials, underground mining, or explosives removal.

Finally, the field of brain research can be enhanced with a greater number of BCIs in use. Understanding the brain will be easier with more data, and researchers have even used a BCI to detect the emotions of people in minimally conscious or vegetative states.  

Closing Thoughts

BCIs will give many who need them a new sense of autonomy and freedom, but several questions remain as the technology progresses. Who will have access, and who will pay for these devices? Is there a need to regulate these devices as they begin to augment human capability, and who will do so? What applications would be considered unethical or controversial? What steps are needed to mitigate information, privacy, security, and military threats?

These questions have yet to be definitively answered, and they should be before the technology matures. The next step for BCIs will be information transfer in the opposite direction, as with Dobelle’s original light-sensing “seeing” BCI of the 1970s: computers telling humans what they see, think, and feel. This step will bring a whole new set of questions to answer.


What Is Haptic Technology?

Haptic technology, or haptic touch, is going to be our engagement pathway for the future. Since the start of the Covid pandemic, we have been working from home more often, and much of our lives have moved online. However, we do not have to worry about losing physical touch.

Haptic technology offers its users a more connected experience, and this budding industry is beginning to make its mark on companies that will likely embrace this evolving tech in the future.  

Tactile feedback technologies have been around for decades. The original Xbox controller would vibrate when you took damage from an adversary, and phones and pagers have had a vibrate function for decades. As haptic technologies advance, they are fast becoming powerful tools for consumer engagement.

We will explore haptic technology’s types, advantages, and use cases, including 3D Touch, showing how it can impact a business’s objectives and growth.  

Haptic Technology Explained

Haptic technology uses hardware and software to produce tactile sensations that stimulate the user’s sense of touch, enhancing their experience. The most common applications are the vibrating haptic solutions found in phones and game controllers. Yet vibration is not the only type of haptic tactile feedback: it can also include heat and cold, air pressure, and sound waves.

Haptic tech is also known as kinesthetic communication or 3D Touch, and it creates new experiences with motion, vibration, and similar forces. Two terms within haptic technology are similar but should be distinguished: haptics and haptic feedback.

  • Haptics: the overarching term that is used to describe the science of haptic feedback and haptic technology, as well as the neuroscience and physiology of touch.  
  • Haptic feedback: the method by which haptic technologies communicate tactile information to the users.

Haptic Applications and Modalities                                     

Immersion is a haptic tech pioneer whose technology is in over 3 billion devices worldwide. They’re the ones that tell your steering wheel to vibrate when you get too close to a car in another lane. One study on haptics showed that 94% of participants could recall objects through touch alone.  

As the global user base of haptic tech grows, it will continue to expand into novel applications, improving the user’s experience.

The Four Haptic Modalities

Let’s introduce the four main haptic modalities: vibration, button stimulation, thermal stimulation, and kinesthetic. 

Vibration

Most haptic experiences center on vibration feedback, created by technologies like eccentric rotating mass (ERM) motors and linear resonant actuators (LRAs). Both produce much of the vibration we experience with mobile and wearable devices.

LRA and ERM from Precision Microdrives
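An LRA is driven most efficiently at its mechanical resonant frequency, often in the 150-200 Hz range. As a hedged sketch, the numpy snippet below synthesizes a short drive waveform for a hypothetical 175 Hz LRA; real haptic driver chips add overdrive and active-braking phases that are omitted here:

```python
# Illustrative drive waveform for a hypothetical LRA resonating at 175 Hz.
# Real haptic driver ICs add overdrive/braking phases omitted here.
import numpy as np

sample_rate = 48_000          # Hz, a typical audio-class output rate
resonance = 175.0             # Hz, assumed LRA resonant frequency
duration = 0.040              # s, a 40 ms "click" burst

t = np.arange(int(sample_rate * duration)) / sample_rate
envelope = np.hanning(t.size)            # smooth attack and release
waveform = envelope * np.sin(2 * np.pi * resonance * t)

print(f"{t.size} samples, peak amplitude {waveform.max():.2f}")
```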

Button Stimulation

Until recently, few touch screens offered the tactile feedback and versatility of mechanical buttons. We can therefore expect simulated controls to become ever more popular, such as the newer offerings from Apple (“Force Touch” and “Haptic Touch”) and Samsung (“One UI 4”). These virtual buttons can use both haptic and audio feedback to replace the feel of a mechanical pressure plate when fingers press the screen.

Thermal Stimulation

Thermoelectric generators create temperature-based haptic experiences for users. This effect is accomplished through the manipulation of electric current flow between alternating conductors on a device (one warm and one cold). The user can then experience different perceived temperatures.  
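The underlying physics is the Peltier effect: heat is absorbed or released at a junction in proportion to the current, Q = Π·I, where the Peltier coefficient Π = S·T (S is the Seebeck coefficient and T the absolute temperature). Here is a back-of-the-envelope sketch with typical textbook values, not any vendor’s specifications:

```python
# Back-of-the-envelope Peltier effect: heat pumped at one junction.
# Values are typical textbook figures, not any vendor's specs.
seebeck = 200e-6      # V/K, typical bismuth-telluride Seebeck coefficient
temperature = 300.0   # K, roughly room/skin temperature
current = 1.5         # A, assumed drive current

peltier_coeff = seebeck * temperature       # Pi = S * T  ->  0.06 V
heat_flow = peltier_coeff * current         # Q = Pi * I  ->  watts
print(f"~{heat_flow * 1000:.0f} mW pumped at the junction")  # ~90 mW
```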

Tegway is producing this technology for VR headsets and other applications to add to the experience.  

Source: Tegway

Kinesthetic

Kinesthetic devices are worn on the user’s body and provide the wearer with haptic feedback sensations of mass, movement, and shape. The Dexmo force feedback haptic glove exemplifies the potential growth avenue available in the kinesthetic modality.

Types of Haptic Systems

Three primary haptic system types are now being used across several industries: graspable, touchable, and wearable. 

Graspable

Graspable devices, such as joysticks and steering wheels, create kinesthetic feedback that informs our nerves, tendons, joints, and muscles. Other applications, such as human-controlled robotic operations, can use graspable haptic systems that provide users with tactile movement, vibration, and resistance, allowing more realistic operation of a remote robot or a system in a virtual environment.

The military is already using graspable haptic devices for their bomb disposal units, while NASA astronauts are using the same technology in robots that make external spacecraft repairs, preventing the need for a much more hazardous and costly spacewalk.  

Touchable

Touchable haptic technology is widely used by consumers, whether or not they are aware of it. Most smartphone screens use haptic technology, replacing the home button with a virtual button and placing the fingerprint reader under the screen. Screens respond to user movements such as touches, taps, and rotations.

A new field within touchable haptic technology, haptography, mimics object textures and movements. TanvasTouch is a pad with programmable textures that users can feel by swiping their fingers across touchscreens, trackpads, and physical surfaces, mimicking clothing materials like wool and silk before buying the items.

Source: Tanvas Touch

Wearables

Wearable haptic systems create contact sensations, relying on tactile stimuli, such as pressure, vibration, or temperature, controlled by the nerves of the user’s skin.

Virtual Reality (VR) products are the most common application of wearable haptic technology available today. VR gloves are meant to mimic real-world impressions, and they receive input from the user who is controlling their virtual avatar. VR and AR can benefit greatly from the endless consumer engagement options that wearables and haptic tech can provide.  

Haptic Technology Uses

Haptic technologies offer numerous potential advantages. Here are several current and potential use cases for touch-based solutions that tap into the benefits of haptics and can produce a better user experience.

Product Design Applications

Haptic technology can improve the user experience by optimizing how products feel to the touch.

Automotive infotainment systems will begin to incorporate more haptics into their feature lists. Touch screens will become responsive to the user, providing personalized settings for multiple drivers. Additional automotive applications include pedal feedback and steering enhancements, needed as drive-by-wire systems become more common. These help drivers avoid accidents and save on gas.

Health and Wellness

The newest advances in wearable haptics provide great opportunities within the health-tech industry.  Real-time haptic devices gather biometric data and can adjust the experience to suit the user.

Better data collection and feedback allow enhanced user experiences and, more importantly, improved health outcomes. TouchPoints has a wearable system which the TouchPoints CEO reports can reduce stress by 74% in 30 seconds.  This is done with a vibrating pattern that interrupts anxiety and builds a restful state.

Source: TouchPoints

Other companies involved with posture correction, like ergonomic furniture makers, app creators, or chiropractors, can use haptic technology to improve their products and benefit their users.  

Industrial Training

With haptic feedback, training environments can simulate natural work environments and labor conditions more closely, improving training and overall accuracy. Users can partake in virtual training scenarios in a safe, offline environment while using haptics to get a lifelike experience. 

This virtual haptic process can allow for training in assembly line usage, maintenance, safety procedures, and machinery operation. A similar haptic feedback system can also be used with product testing and many other uses, allowing users to train without risk to themselves or company property.

Accessibility

Accessibility to products and services can be improved for visually impaired users. Haptic technologies let users create virtual objects, interact with products, and even approximate an object’s appearance through touch-based sensory input. A Stanford team has developed a 2.5D display that helps visually impaired people accomplish visual tasks.

Not only will these new haptic solutions create novel markets and aid those with accessibility restrictions, but they can help ensure a company stays compliant with access regulations.

Rehabilitation

Haptics has the potential to boost the speed and effectiveness of rehabilitation programs. A Dutch startup, SenseGlove, has created a glove that uses VR simulations and haptic training to aid with virtual recovery programs.

Source: SenseGlove

Their product allows someone suffering from nerve damage due to an accident, illness, or stroke to practice daily actions. Things like pouring a cup of hot tea or cutting a steak for dinner can be done in a safe digital environment.

Remote Tasks

With an internet connection, haptic controller, and connected robot, remote tasks will become easier and far less prone to error.

Industries lacking highly skilled specialists can connect via a virtual haptic environment, allowing subject matter experts to manipulate a robot from anywhere in the world or beyond.

Closing Thoughts

Haptic technologies have been around for decades. However, the sector has seen tremendous growth in the past few years, and APAC research expects the world’s haptic technology market to grow at a compound annual rate of 12% through 2026.

Source: APAC

Haptics is no longer a video game gimmick. New advancements and applications are becoming more widely available. Businesses should explore implementing these technologies into their operations, marketing, and consumer experiences.

By embracing this innovative technology, companies can offer their users an enhanced experience that makes them feel connected to products, services, and the brand. Haptics enables us to feel much more connected, no matter how far the distance between us may be.


AR, MR, VR, and XR

When someone enters the immersive technology arena, one of the first questions they may ask is: what’s the difference between virtual reality (VR) and augmented reality (AR)? 

These two are reasonably easy to distinguish, but additional terms, such as mixed reality (MR) and extended reality (XR), are less standard. All of these terms are becoming more prevalent; the technologies are separate for now, but as we advance they will become joined aspects of the metaverse.

We will go through all of these concepts to improve our understanding and provide a few examples.  

What is VR?

Virtual reality, or VR, is what most prominent tech companies are pushing as the metaverse. It is an entirely immersive alternative reality that is coming to the mass market. It can be experienced by wearing a VR headset such as the “Meta Quest,” formerly Oculus.

Meta Quest 2, courtesy of GameStop

Wearing a VR headset is like having a large screen directly in front of you. It surrounds your vision, and you cannot see anything else, leaving you entirely immersed in the digital environment. A user at home, for example, can be transported to an entirely new world through the headset’s immersive audio and visual experience.

An excellent example of a VR use case allowed hundreds of people in a shopping mall to ride along with the European Rallycross Championship winner.

Virtual reality rally with the European Rallycross Champion, Reinis Nitiss, courtesy of Lattelecom

At the shopping center, people physically sat in real racing car seats mounted to a wall, yet the virtual reality system put them in the car, riding along with the Latvian champion at full speed on the rallycross track.

While Oculus was the first widespread VR headset, it was priced out of the range of most consumers; the best-known early application was much more accessible: the Google Cardboard. These simple folding cardboard viewers are still available and let users insert a mobile phone to serve as the headset’s screen.

Samsung Gear was the next, more widely accessible VR application, using a head mount that came with every Galaxy S6 flagship phone purchase.

Courtesy of Samsung Gear

VR has broadened beyond these initial devices. With Meta’s (Facebook’s) purchase of Oculus and its intention to take over the metaverse space, the newer generation of devices is compelling, no longer the freebie novelty items they once were. VR has several entertainment uses and is now most familiar through gaming.

However, VR can also add a lot of value to other applications, such as education, manufacturing, and medicine.

AR Versus VR

The main idea behind AR is to add to the reality we are experiencing at any given time rather than completely overwriting our current surroundings and entering a new world.

While VR takes you away from everything around you, AR enhances the real-life environment by placing digital objects and audio into it, typically through a handheld device. One of the best-known augmented reality applications, Pokemon Go, emerged in 2016.

Courtesy of Informatics

A great use case for AR is the retail sector, providing customers with benefits once solely the domain of in-store shopping. Through AR, a visual representation (a hologram) of an item, say a piece of clothing, can be overlaid on the customer’s current environment.

AR can also be an excellent tool to help customers understand the spatial orientation of objects, such as placing furniture, appliances, or fixtures in their immediate location and seeing whether they fit the potential buyer’s kitchen or office.

Other AR companies like Magic Leap are creating lightweight AR solutions and making the technology accessible. They have industrial solutions available from the Americas to Asia. Magic Leap has been working with companies like Cisco, SentiAR, NeuroSync, Heru, Tactile, PTC, and Brainlab to refine and improve their devices for communication, training, and remote assistance for use cases in industrial environments, clinical settings, retail stores, and defense.

Courtesy of Magic Leap

The commercial AR market is developing rapidly as well, making it easier for consumers both to view AR and to create augmented reality content. For example, Overlee offers canvas prints, augmented photos, albums, cards, and wedding invitations that, when viewed through AR, play a video along with the photo. Some wine brands have even added AR to their labels.

Courtesy of LivingWineLabels

AR and VR Versus MR

MR is similar to AR: it does not remove you from your current surroundings, but rather reads those surroundings and adds digital objects into your environment. However, unlike most AR content, which can be viewed on a mobile device, you will need a headset, such as Magic Leap’s, to experience mixed reality fully.

Although MR and AR use cases often overlap, mixed reality can provide much richer interaction with digital content; there is no need to hold a mobile device to keep the illusion going. However, the headset requirement makes MR less accessible to the mass market. For comparison, GSMA data counts 10.98 billion global mobile connections.

Nothing similar can be said of MR headset ownership: the devices are pricey and still in their early stages. Change will take time, but the potential is enormous; once the hardware and software mature and acceptance broadens, adoption could accelerate quickly.

Closing Thoughts

VR has a head start in the field, being more accessible and easier to implement than AR and MR. Even so, VR is still becoming established, and the area has several growth opportunities, including body suits and treadmills.

Courtesy of Virtuix

Though VR has the lead, the long-term prospects for the other realities are equally good. The main difference between VR and AR is the interface: the current generation of VR hardware is bulky and can cause dizziness or eye strain, which is not true of AR and MR. In addition, AR and MR offer many use cases across marketing, art, education, and industrial applications.

The current devices will become less intrusive, and though we use mobile devices now, items like Google Glass (but better designed) will become more common. The future points to a growing number of ergonomic devices for alternative realities, replacing the cell phone.


Next Generation DNA Sequencing

A sequence tells a scientist the genetic information carried on a particular DNA or RNA segment. For example, the sequence can be used to determine where genes are located on a DNA strand and where regulatory instructions turn those genes on or off. 

In the mid-90s, colleges started teaching their undergraduates about DNA sequencing, with DNA sample amplification then the new kid on the block. The Human Genome Project was ongoing, and the first human sequence had yet to be completed.

Twenty-five years later, DNA sequencing is done routinely for many people and has helped dramatically with medical and forensic needs. We are now entering a whole new era of sequencing: the “next generation.” Let’s look at this change and how this generation alters science and medicine.

What Is Next Generation DNA Sequencing?

Next generation DNA sequencing (NGS) gained prominence in the early 2010s; the term describes DNA sequencing technologies that have revolutionized genomic research.

Original DNA Sequencing

To understand NGS, we need to understand the original type of DNA sequencing. 

First, a DNA strand was copied to create enough material. Then, one by one, the bases were determined using the chain-termination method, commonly known as Sanger sequencing, in which an electric field pulls DNA fragments through capillary gels that sort them by length.

The Human Genome Project used Sanger sequencing: multiple international teams spent 13 years and $3 billion deciphering the human genome to produce the final draft released in 2003. By 2008, using several NGS techniques, the genome of James Watson, co-discoverer of DNA’s structure, was sequenced and delivered to him on a hard drive at an estimated cost of $1 million.

In 2011, Apple co-founder and billionaire Steve Jobs had his genome sequenced for $100,000 to aid his cancer fight. Using NGS, a lab can now sequence an entire human genome in a single day at a cost of $100 (Ultima Genomics).

How NGS Works

NGS is a four-step process that breaks up the sample (DNA or RNA) and sequences the pieces simultaneously for faster results.

Source: Illumina

The process is generally as follows:

1. Sample preparation: DNA/RNA is fragmented into multiple pieces (millions for the human genome), and “adapters” are attached to the ends of the fragments.

2. Cluster generation: the separated strands are copied millions of times to produce a larger sample.

3. Sequencing the libraries: each strand is sequenced with unique fluorescent markers.

4. Data analysis: the genomic sequence is formed by reassembling the strands from their overlaps, as sketched below.
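Step 4’s reassembly relies on overlaps between reads. The toy Python sketch below greedily merges reads by their longest suffix-prefix overlap on a made-up sequence; production assemblers use far more sophisticated graph methods and error correction:

```python
# Toy read reassembly: greedily merge reads by longest suffix/prefix
# overlap. Real assemblers use de Bruijn graphs and error correction.
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is a prefix of b."""
    for size in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:size]):
            return size
    return 0

def assemble(reads: list[str]) -> str:
    reads = reads[:]
    while len(reads) > 1:
        # Find the pair with the largest overlap and merge it.
        i, j, k = max(((i, j, overlap(reads[i], reads[j]))
                       for i in range(len(reads)) for j in range(len(reads))
                       if i != j), key=lambda t: t[2])
        merged = reads[i] + reads[j][k:]
        reads = [r for idx, r in enumerate(reads) if idx not in (i, j)]
        reads.append(merged)
    return reads[0]

reads = ["GGCTA", "CTAAC", "AACTT"]     # fragments of GGCTAACTT
print(assemble(reads))                   # -> GGCTAACTT
```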

In principle, the NGS concept is similar to the capillary electrophoresis used in Sanger sequencing. The critical difference is that NGS obtains the sequences of millions of fragments in a massively parallel fashion, improving accuracy and speed while reducing the cost of sequencing.

NGS’ Impact

Compared to the conventional Sanger method’s capillary electrophoresis, NGS’ short-read, massively parallel technique is a fundamentally different approach that revolutionized sequencing capabilities and launched the second generation of sequencing methods.

NGS allows both DNA and RNA to be sequenced at a drastically lower cost than Sanger sequencing, and it has therefore revolutionized genomics and molecular biology.

NGS’ Advantages

Because NGS can analyze both DNA and RNA samples, it’s a popular tool for functional genomics. In addition, NGS has several advantages over microarray methods.

· A priori knowledge of the genome or of any genomic features is not a requirement.  

· NGS offers single-nucleotide resolution, detecting related genes and features, genetic variations, and even single-base-pair differences. In short, it can spot slight differences in code between two samples. 

· NGS has a higher dynamic signal range, making it easier to read.

· NGS requires less DNA or RNA as an input (nanograms of material are sufficient). 

· NGS has higher reproducibility. Because of its other advantages, the chance of an error between repeated tests is reduced.  

Most Common NGS Technologies

Three sequencing methods, past and present, are widely used under the NGS umbrella:

· Roche 454 sequencing (discontinued in 2016). This method uses a pyrosequencing technique that detects pyrophosphate release via bioluminescence (a natural light signal). Broken-up DNA strands had unique markers attached.  

Source: bioz.com

· Illumina (Solexa) sequencing. The Illumina process identifies DNA bases as they are added to a growing nucleic acid chain, each base emitting a different and unique fluorescent signal.

Source: bioz.com

· Ion Torrent (Proton/PGM) sequencing. This method directly measures the hydrogen ions (protons) released as a DNA polymerase incorporates individual bases. Ion Torrent differs from the previous two methods in that it does not use light to perform the sequencing.  

Source: bioz.com

How Is NGS Being Used?

The advent of NGS has changed the biotechnology industry. Scientists can now ask and answer questions that were previously cost-prohibitive or that required more sample material than was available. The main applications possible with NGS include:

· Rapidly sequence the whole genome of any life form, from prions and RNA viruses to individual humans and other mammals.

· Use RNA sequencing to discover novel RNA variants and splice sites.

· Quantify mRNAs for gene expression analysis.

· Sequence cancer samples to study rare variants, specific tumor subtypes, and more.

· Identify novel pathogens (such as viruses in bats).

What Can NGS Do? 

Notable organizations, such as Illumina, 454 Life Sciences, Pacific Biosciences, and Oxford Nanopore Technologies, are working to bring prices down so nearly anyone can have sequencing done. Ultima Genomics, for example, has claimed a cost of $100 for its sequencing. Companies are now marketing benchtop sequencing platforms that will bring these advances to as many labs as possible.

Source: Illumina

The Illumina NextSeq sequencer (above) is a benchtop system that can handle nearly any task except “Large Whole-Genome Sequencing.” However, the system costs $210,000-$335,000.

We expect NGS to become more efficient and affordable over time, and these cost reductions will revolutionize several genomics-related fields. Currently, all NGS approaches require “library preparation” after the DNA fragmentation step, in which adapters are attached to the ends of the various fragments. This is generally followed by a DNA amplification step to create a library that the NGS device can sequence.

As we know more about different DNA molecules, we can develop ways to fight disease through gene therapy or particular drugs. This knowledge will help change our way of thinking about medicine.  

Third Generation Sequencing

A new class of sequencing tech, called third-generation sequencing or TGS, is being developed. These technologies can sequence single DNA molecules without the amplification step, producing longer reads than NGS. 

Single-molecule sequencing was started in 2009 by Helicos Biosciences. Unfortunately, it was slow and expensive, and the company went out of business in 2012. Nonetheless, other companies saw the benefit and took over the third-gen space.  

Pacific Biosciences has its single-molecule real-time (SMRT) sequencing, and Oxford Nanopore has nanopore sequencing. Each can produce long reads of 15,000 bases from a single DNA or RNA molecule. This evolution means genomes can be produced without the biases or errors inherent to amplification.

Closing Thoughts

The DNA sequence is a simple format onto which a broad range of biological marvels can be projected for high-value data collection. Over the past decade, NGS platforms have become widely available, with service costs dropping by orders of magnitude, much faster than Moore’s law, democratizing genomics and putting the tech into the hands of more scientists.

Third generation sequencing will require robust protocols and practical data approaches, and the coming expansion of DNA sequencing will require a complete rethinking of experimental design. Still, it will accelerate biological and biomedical research, enabling the analysis of complex systems inexpensively and at scale. We can then fight and prevent genetic diseases before they become realized issues.

