Hologram Technology and AI-Based Chatbots

Integrating hologram technology and AI-based chatbots is an exciting new frontier in digital communication. Hologram technology provides a new way to interact with information and data, while AI-based chatbots are changing how people communicate with businesses and organisations. Together, these technologies offer unique opportunities for organisations to engage with customers, employees and other stakeholders in more meaningful ways.

The market for hologram technology and AI-based chatbots is snowballing. According to a report from ResearchAndMarkets.com, the global holographic display market will reach US$13.5 billion by 2026, growing at a CAGR of 26.8% from 2020 to 2026. Meanwhile, the global AI-based chatbot market is expected to reach US$1.3 billion by 2024, growing at a CAGR of 24.3% from 2019 to 2024.

What Is Hologram Technology?

Hologram technology is a cutting-edge digital visual solution that allows users to project three-dimensional images into real-world environments. The technology uses light and projection systems to create an illusion of a solid object, which can be viewed from multiple angles and appears to have depth. Holograms can be used for various applications, including entertainment, advertising, and educational purposes.

One of the significant benefits of hologram technology is that it can help businesses to stand out and capture the attention of their customers. Using holograms to showcase their products, companies can offer a unique and engaging experience that can differentiate them from their competitors. For example, hologram technology can be used to create interactive product displays that allow customers to explore a product from all angles, providing a more immersive experience.

Another benefit of hologram technology is that it can be used to improve the efficiency of communication between employees and customers. With hologram technology, employees can remotely participate in meetings and presentations, allowing them to connect with colleagues and customers from anywhere in the world. Additionally, holograms can be used to conduct virtual product demonstrations, making it easier for businesses to showcase their products and services to customers.

Furthermore, hologram technology can also be used to improve training and development opportunities for employees. With holograms, employees can receive hands-on training and experience simulations in a controlled and safe environment. This type of training can be beneficial for industries such as construction, aviation, and healthcare, where hands-on training is required to ensure the safety and well-being of employees and customers.

What Are AI-Based Chatbots?

AI-based chatbots are computer programs designed to simulate human conversations with users. They use artificial intelligence and machine learning algorithms to understand and respond to user requests in natural language. Chatbots break down the user’s input into individual words and phrases and then analyse them to determine the user’s intent. Based on the intent, the chatbot selects a response from a predetermined list of options or generates a response using deep learning algorithms.
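The parse-intent-respond loop described above can be sketched minimally. The intents, keywords, and canned responses below are illustrative assumptions, not any vendor's implementation; production chatbots use statistical NLU models rather than simple keyword overlap.

```python
# Toy intent-matching chatbot: tokenise input, score intents by
# keyword overlap, and return a canned response (all illustrative).

INTENTS = {
    "store_hours": {
        "keywords": {"hours", "open", "close", "closing"},
        "response": "We are open 9am-6pm, Monday to Saturday.",
    },
    "order_status": {
        "keywords": {"order", "delivery", "shipped", "tracking"},
        "response": "Please share your order number and I will check its status.",
    },
}

FALLBACK = "Sorry, I didn't understand. Could you rephrase that?"

def classify_intent(user_input: str):
    """Break the input into tokens and score each intent by keyword overlap."""
    tokens = set(user_input.lower().replace("?", " ").split())
    best_intent, best_score = None, 0
    for name, intent in INTENTS.items():
        score = len(tokens & intent["keywords"])
        if score > best_score:
            best_intent, best_score = name, score
    return best_intent

def respond(user_input: str) -> str:
    intent = classify_intent(user_input)
    return INTENTS[intent]["response"] if intent else FALLBACK
```

For instance, `respond("What are your hours of operation?")` matches the `store_hours` intent, while an unrecognised input falls through to the fallback reply.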

One of the key benefits of using AI-based chatbots is that they can simultaneously handle a large volume of customer interactions, 24/7, without human intervention. This means that customers can receive fast and efficient support outside business hours. Chatbots also offer a convenient and accessible way for customers to interact with a company, as they can be integrated into websites, messaging apps, and other digital platforms.

Some of the companies that are using AI-based chatbots effectively include:

  • Bank of America. The bank’s virtual assistant, Erica, uses natural language processing and machine learning to help customers manage their finances and answer questions about their accounts.
  • H&M. The fashion retailer has integrated chatbots into its customer service operations, allowing customers to use messaging apps to receive fast support with their orders and returns.
  • Sephora. Sephora’s chatbot, named ‘Sephora Assistant’, uses AI to provide customers with personalised beauty recommendations and product information.

Overall, AI-based chatbots offer businesses a cost-effective and efficient way to interact with customers. Their capabilities constantly improve as advancements in artificial intelligence and machine learning continue.

Hologram Technology and AI-based Chatbots: Working Together

Hologram technology and AI-based chatbots can work together to provide a more immersive customer experience. With hologram technology, a computer-generated 3D image of a person or object is projected into the real world, giving the illusion of a physical presence. By integrating AI-based chatbots into this technology, businesses can create virtual assistants that can interact with customers in real time and provide personalised support.

For example, a customer might approach a holographic display and ask questions such as ‘What are your hours of operation?’ The AI-based chatbot would recognise the customer’s voice, process the request, and respond appropriately through the holographic image. The chatbot can also use the customer’s previous interactions and preferences to personalise the interaction and provide a more tailored experience.
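The interaction loop in the example above can be sketched end to end. The speech recognition, intent handling, and holographic rendering steps are all stubbed; the names and replies here are illustrative assumptions, not a real product’s API.

```python
# Sketch of a holographic assistant pipeline:
# audio -> transcription -> chatbot answer -> holographic output.

def transcribe(audio: bytes) -> str:
    # Stub: a real deployment would call a speech-to-text service.
    return audio.decode("utf-8")

def answer(question: str, history: list) -> str:
    # Record the interaction so later replies can be personalised.
    history.append(question)
    if "hours" in question.lower():
        return "We are open 9am-6pm."
    return "Let me connect you with a colleague."

def render_hologram(text: str) -> str:
    # Stub: a real display would animate the projected figure and
    # play synthesised speech; here we just tag the output.
    return f"[hologram speaks] {text}"

def handle_visitor(audio: bytes, history: list) -> str:
    return render_hologram(answer(transcribe(audio), history))
```

Calling `handle_visitor(b"What are your hours of operation?", [])` returns `"[hologram speaks] We are open 9am-6pm."`, and the growing `history` list stands in for the stored preferences a real assistant would use to tailor replies.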

One company using this technology effectively is Lowe’s, the home improvement retailer. Lowe’s has developed ‘The Lowe’s Holoroom’, which uses holographic technology and AI-based chatbots to help customers plan and visualise their home improvement projects.


Google announced a project in 2021, Project Starline, that uses holograms in video chats. According to the futuristic idea, users are rendered as life-size 3D holographic replicas of themselves in virtual chat booths, giving the impression that both participants are in the same room.

The Challenges

There are several challenges in combining hologram technology with AI-based chatbots, including:

  • Technical complexity. Hologram technology requires specialised hardware and high-performance computing resources, making it challenging to integrate with AI-based chatbots. Additionally, the development of holographic displays that can interact in real time with AI-based chatbots is still in its early stages.
  • Cost. Implementing hologram technology can be expensive, which may limit its widespread adoption. This high cost can make it difficult for companies to integrate hologram technology with AI-based chatbots, as both technologies require significant investment.
  • Interoperability. Hologram technology and AI-based chatbots are separate technologies, each with its own standards and protocols. Integrating them seamlessly and effectively can be challenging, as they may not be designed to work together.
  • User experience. Creating a seamless and intuitive user experience that effectively combines hologram technology and AI-based chatbots can be difficult. A key challenge is ensuring that the technology is easy to use and provides a consistent and engaging experience for customers.
  • Privacy and security. Integrating hologram technology and AI-based chatbots raises privacy and security concerns, as the technology can collect and store sensitive customer data. Ensuring the security and privacy of this data is a critical challenge that must be addressed.

Despite these challenges, the potential benefits of combining hologram technology with AI-based chatbots are significant. As technology advances, we will likely see continued innovation and progress in this field.

Closing Thoughts

It is difficult to say whether hologram technology is the future of AI-based chatbots, as these technologies are constantly evolving. While hologram technology has the potential to provide a more interactive customer experience, it also presents several challenges, such as the need for specialised hardware and high-performance computing resources. Additionally, the cost of implementing hologram technology is currently high, which may limit its widespread adoption.

That being said, AI-based chatbots and hologram technology are two of the most promising advancements today, and they have the potential to complement each other in many ways. As both technologies continue to advance, we will likely see more companies exploring the possibilities of integrating them to create new and innovative customer experiences.

While hologram technology may play a role in the future of AI-based chatbots, it is too soon to predict the exact trajectory of this field. The integration of these technologies will continue to evolve, and we will likely see various approaches to combining AI-based chatbots and hologram technology in the future.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltec.io

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltec.io

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

Brain-Computer Interfaces

Brain-computer interfaces are devices that allow people to control machines with their thoughts. This technology has been the stuff of science fiction and even children’s games for years. 

Mindflex game by Mattel

On the more advanced level, brain-computer technology remains highly experimental but has vast possibilities. First to mind (no pun intended) would be aiding those with paralysis by creating electrical impulses that let them regain control of their limbs. Second, the military would like to see its service members operating drones or missiles hands-free on the battlefield.

There are also concerns raised when a direct connection is made between a machine and the brain. For example, such a connection could give users an unfair advantage, enhancing their physical or cognitive abilities. It also means hackers could steal data related to the user’s brain signals.  

With this article, we explore several opportunities and issues that are related to brain-computer interfaces.  

Why Do Brain-Computer Interfaces Matter?

Brain-computer interfaces allow their users to control machines with their thoughts. Such interfaces can aid people with disabilities, and they can enhance the interactions we have with computers. The current iterations of brain-computer interfaces are primarily experimental, but commercial applications are just beginning to appear. Questions about ethics, security, and equity remain to be addressed. 

What Are Brain-Computer Interfaces? 

A brain-computer interface (BCI) enables the user to control an external device by way of their brain signals. One current use under development is a BCI that would allow patients with paralysis to spell words on a computer screen.

Additional use cases include: a spinal cord injury patient regaining control of their upper body limbs, a BCI-controlled wheelchair, or a noninvasive BCI that would control robotic limbs and provide haptic feedback with touch sensations. All of this would allow patients to regain autonomy and independence.

Courtesy of Atom Touch

Beyond the use of BCIs for the disabled, the possibilities for BCIs that augment typical human capabilities are abundant. 

Neurable has taken a different route, creating headphones designed to improve focus. They require no touch to control, responding instead to a wink or a nod, and will be combined with VR for a richer experience.

Courtesy of Neurable

How Do BCIs Work?

Training

Generally, a new BCI user will go through an iterative training process. The user learns how to produce signals that the BCI will recognize, and then the BCI will take those signals and translate them for use by way of a machine learning algorithm. Machine learning is useful for correctly interpreting the user’s signals, as it can also be trained to provide better results for that user over time. 
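The iterative loop above can be illustrated with a toy model: the “brain signals” are synthetic feature vectors, and the machine learning step is a simple nearest-centroid classifier whose accuracy improves as more labelled training examples accumulate. Real BCIs use far richer signals and models; everything here is an illustrative assumption.

```python
# Toy BCI training loop: simulate labelled "signals" for two
# intended commands, train a nearest-centroid classifier, predict.
import random
import statistics

random.seed(0)

def simulate_signal(command: str):
    # Pretend "left" and "right" intentions produce feature vectors
    # centred on different means, with noise.
    mean = -1.0 if command == "left" else 1.0
    return [random.gauss(mean, 0.5) for _ in range(4)]

class NearestCentroid:
    def __init__(self):
        self.examples = {"left": [], "right": []}

    def train(self, command, signal):
        self.examples[command].append(signal)

    def centroid(self, command):
        sigs = self.examples[command]
        return [statistics.mean(s[i] for s in sigs) for i in range(4)]

    def predict(self, signal):
        def dist(command):
            cen = self.centroid(command)
            return sum((a - b) ** 2 for a, b in zip(signal, cen))
        return min(("left", "right"), key=dist)

bci = NearestCentroid()
for _ in range(20):  # iterative training session
    for cmd in ("left", "right"):
        bci.train(cmd, simulate_signal(cmd))
```

After the short training session, `bci.predict(...)` maps a new signal to the nearer centroid, mirroring how a real decoder improves for a particular user as more of their data is collected.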

Connection

BCIs will generally connect to the brain in two ways: through wearable or implanted devices. 

Implanted BCIs are often surgically attached directly to brain tissue, but Synchron has developed a catheter-delivered implant that taps into blood vessels in the chest to capture brain signals. The implants are more suitable for those with severe neuromuscular disorders and physical injuries where the cost-benefit is more favorable. 

A person with paralysis could regain precise control of a limb by using an implanted BCI device attached to specific neurons; any increase in function would be beneficial, but the more accurate, the better. Implanted BCIs can measure signals directly from the brain, reducing interference from other body tissues. However, most implants pose other risks, primarily surgical ones such as infection and rejection. Some implanted devices can reduce these risks by placing the electrodes on the brain’s surface using a method called electrocorticography, or ECoG.

Courtesy of the Journal of Neurosurgery

Wearable BCIs, on the other hand, generally require a cap containing conductors that measure brain activity detectable on the scalp. The current generation of wearable BCIs is more limited, used for applications such as augmented and virtual reality, gaming, or controlling an industrial robot.

Most wearable BCIs use electroencephalography (EEG), with electrodes contacting the scalp to measure the brain’s electrical activity. A more recent, emerging wearable method incorporates functional near-infrared spectroscopy (fNIRS), in which near-infrared light is shined through the skull to measure blood flow, which, when interpreted, can indicate information like the user’s intentions.
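As a rough illustration of the signal processing behind an EEG-style BCI, the sketch below measures the power of a sampled signal in a frequency band (the 8-12 Hz alpha band) using a naive discrete Fourier transform. The synthetic signal and band boundaries are illustrative; real EEG processing adds filtering and artifact rejection.

```python
# Band-power estimate of a sampled signal via a naive DFT.
import math

FS = 128  # sampling rate in Hz
N = 128   # one second of samples

def dft_magnitude(samples):
    """Naive DFT magnitudes (O(n^2), fine for a demo).
    Bin k corresponds to k * FS / len(samples) Hz."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

def band_power(samples, lo_hz, hi_hz):
    mags = dft_magnitude(samples)
    hz_per_bin = FS / len(samples)
    return sum(m ** 2 for k, m in enumerate(mags) if lo_hz <= k * hz_per_bin <= hi_hz)

# Synthetic "relaxed" signal: a dominant 10 Hz alpha rhythm.
signal = [math.sin(2 * math.pi * 10 * i / FS) for i in range(N)]
alpha = band_power(signal, 8, 12)   # strong
beta = band_power(signal, 13, 30)   # near zero for this signal
```

A decoder would compare band powers like these across electrodes and time windows to infer the user’s state or intention.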

To enhance their usefulness, researchers are developing BCIs that utilize portable methods for data collection, including wireless EEGs. These advancements allow users to move freely. 

The History of BCIs

Most BCIs are still considered experimental. Researchers began testing wearable BCI tech in the early 1970s, and the first human-implanted BCI was Dobelle’s first prototype, implanted into “Jerry,” a man blinded in adulthood, in 1978. A BCI with 68 electrodes was implanted into Jerry’s visual cortex. The device succeeded in producing phosphenes, the sensation of “seeing” light.  

In the 21st century, BCI research increased significantly, with thousands of research papers published. Among the milestones: in 2005, tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI. Nagle was part of Cyberkinetics Neurotechnology’s first nine-month human trial of its BrainGate chip implant.

Even with the advances, it is estimated that fewer than 40 people worldwide have implanted BCIs, and all of them are considered experimental. The market is still limited, and projections are that the total market will only reach $5.5 million by 2030. Two significant obstacles to BCI development are that each user generates their own brain signals and those signals are difficult to measure.  

The majority of BCI research has historically focused on biomedical applications, helping those with disabilities from injury, neurological disorder, or stroke. The first BCI device to receive Food and Drug Administration authorization was granted in April 2021. The device (IpsiHand) uses a wireless EEG headset to help stroke patients regain arm and hand control.  

Concerns With BCI

Legal and security implications of BCIs are the most common concerns held by BCI researchers. Because of the prevalence of cyberattacks already, there is an understandable concern of hacking or malware that could be used to intercept or alter brain signal data stored on a device like a smartphone.

The US Department of Commerce (DoC) is reviewing the security implications of exporting BCI technology. The concern is that foreign adversaries could gain an intelligence or military advantage. The DoC’s decision will affect how BCI technology is used and shared abroad.

Social and Ethical Concerns

Those in the field have also considered BCIs’ social and ethical implications. Wearable BCIs can cost from hundreds to thousands of dollars, a price that would likely mean unequal access.

Implanted BCIs cost much more. The training process for some types of BCIs is significant and could be a burden on users. It has been suggested that if the translations of BCI signals for speech are inaccurate, then great harm could result. 

The Opportunities of BCIs

The main opportunities BCIs will initially provide are helping those paralyzed by injury or disorders to regain control of their bodies and communicate. This is already seen in current research, but in the long term it is only a stepping stone.

The augmentation of human capability, be it on the battlefield, in aerospace, or in day-to-day life, is the longer-term goal. BCI robots could also aid humans with hazardous tasks or hazardous environments, such as radioactive materials, underground mining, or explosives removal.  

Finally, the field of brain research can be enhanced with a greater number of BCIs in use. Understanding the brain will be easier with more data, and researchers have even used a BCI to detect the emotions of people in minimally conscious or vegetative states.  

Closing Thoughts

BCIs will provide many who need them a new sense of autonomy and freedom they lack, but several questions remain as the technology progresses. Who will have access, and who will pay for these devices? Is there a need to regulate these devices as they begin to augment human capability, and who will do so? What applications would be considered unethical or controversial?  What steps are needed to mitigate information, privacy, security, and military threats?  

These questions have yet to be definitively answered, and they should be answered before the technology matures. The next step for BCIs will be information transfer in the opposite direction, as with Dobelle’s original light-sensing “seeing” BCI of the 1970s: computers telling humans what they see, think, and feel. This step will bring a whole new set of questions to answer.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

What Is Haptic Technology?

Haptic technology, or haptic touch, is going to be our engagement pathway for the future. Since the start of the Covid pandemic, we have been working from home more often, and much of our lives are online. However, we do not have to worry about losing physical touch.

Haptic technology offers its users a more connected experience, and this budding industry is beginning to make its mark on companies that will likely embrace this evolving tech in the future.  

Tactile feedback technologies have been around for decades. The original Xbox controller would vibrate when you took damage from an adversary, and phones and pagers have long had a vibrate function. As haptic technologies advance, they’re fast becoming powerful tools for consumer engagement.

We will explore haptic technology’s types, advantages, and use cases, including 3D Touch, showing how it can impact a business’s objectives and growth.  

Haptic Technology Explained

Haptic technology uses hardware and software to produce tactile sensations that stimulate the user’s sense of touch, to enhance their experience. For example, the most common applications are the haptic solutions found with phones and game controllers that vibrate. Yet vibrating devices are not the only type of haptic tactile feedback: they can also include things like heat and cold, air pressure, and sound waves.  

Haptic tech is also known as kinesthetic communication or 3D Touch, and this technology creates new experiences with motion, vibration, and similar forces. There are two similar terms within haptic technology that should be distinguished: haptics and haptic feedback.

  • Haptics: the overarching term that is used to describe the science of haptic feedback and haptic technology, as well as the neuroscience and physiology of touch.  
  • Haptic feedback: the method by which haptic technologies communicate tactile information to the users.

Haptic Applications and Modalities

Immersion is a haptic tech pioneer whose technology is in over 3 billion devices worldwide. They’re the ones that tell your steering wheel to vibrate when you get too close to a car in another lane. One study on haptics showed that 94% of participants could recall objects through touch alone.  

As the global user base of haptic tech grows, it will continue to expand into novel applications, improving the user’s experience.

The Four Haptic Modalities

Let’s introduce the four main haptic modalities: vibration, button stimulation, thermal stimulation, and kinesthetic. 

Vibration

The majority of haptic experiences have a vibration-centric feedback focus. These include technologies like eccentric rotating mass (ERM) motors and linear resonant actuators (LRAs), both of which create much of the vibration we experience with mobile or wearable devices.

LRA and ERM from Precision Microdrives
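The distinction between the two actuator types can be sketched as follows: in an ERM, frequency and amplitude are coupled because both rise with drive voltage, while an LRA vibrates at a fixed resonant frequency and only its amplitude is modulated. The specific frequency values below are illustrative assumptions.

```python
# Illustrative waveform models for ERM vs. LRA actuators.
import math

def erm_vibration(drive: float, t: float) -> float:
    """drive in [0, 1]; motor speed sets both frequency AND amplitude."""
    freq = 50 + 200 * drive  # Hz, rises with drive voltage
    amp = drive              # amplitude rises with drive voltage too
    return amp * math.sin(2 * math.pi * freq * t)

LRA_RESONANCE = 175  # Hz, fixed by the actuator's spring-mass system

def lra_vibration(drive: float, t: float) -> float:
    """drive in [0, 1]; only amplitude changes, frequency stays fixed."""
    return drive * math.sin(2 * math.pi * LRA_RESONANCE * t)
```

This coupling is why LRAs are preferred for crisp haptic effects: the driver can vary intensity without the buzz pitch changing, which an ERM cannot do.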

Button Stimulation

Until recently, few of our touch screens offered the tactile feedback and versatility of mechanical buttons. Therefore, we expect simulated controls to become ever more popular, such as Apple’s newer offerings (“Force Touch” and “Haptic Touch”) and Samsung’s (“One UI 4”). These virtual buttons can use both haptic and audio feedback to replace the feel of a mechanical pressure plate when fingers press the screen.
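The trigger logic for such a simulated button can be sketched as a threshold crossing on measured finger pressure; the threshold value and event name below are illustrative assumptions, not any platform’s API.

```python
# Fire a haptic "click" each time finger pressure crosses a
# threshold upward, and re-arm once it drops back below.
PRESS_THRESHOLD = 0.6  # normalised pressure, illustrative

def button_events(pressure_samples):
    """Return a 'haptic_click' event per upward threshold crossing."""
    events = []
    pressed = False
    for p in pressure_samples:
        if not pressed and p >= PRESS_THRESHOLD:
            events.append("haptic_click")
            pressed = True
        elif pressed and p < PRESS_THRESHOLD:
            pressed = False
    return events
```

Hysteresis around the threshold (re-arming only after release) prevents a finger hovering at the boundary from firing a rapid burst of clicks.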

Thermal Stimulation

Thermoelectric generators create temperature-based haptic experiences for users. This effect is accomplished through the manipulation of electric current flow between alternating conductors on a device (one warm and one cold). The user can then experience different perceived temperatures.  

Tegway is producing this technology for VR headsets and other applications to add to the experience.  

Source: Tegway

Kinesthetic

Kinesthetic devices are worn on the user’s body and provide the wearer with haptic feedback sensations of mass, movement, and shape. The Dexmo force feedback haptic glove exemplifies the potential growth avenue available in the kinesthetic modality.

Types of Haptic Systems

Three primary haptic system types are now being used across several industries: graspable, touchable, and wearable. 

Graspable

Graspable devices, such as joysticks and steering wheels, can create kinesthetic feedback that informs our nerves, tendons, joints, and muscles. Other applications, such as human-controlled robotic operations, can utilize graspable haptic systems that will provide users with tactile movement, vibration, and resistance. This allows for more realistic operations of a remote robot or a system in a virtual environment.

The military is already using graspable haptic devices for their bomb disposal units, while NASA astronauts are using the same technology in robots that make external spacecraft repairs, preventing the need for a much more hazardous and costly spacewalk.  

Touchable

Touchable haptic technology is being used ever more widely by consumers, whether or not they are aware of it. Most smartphone screens use haptic technology, replacing the physical home button with a virtual one and placing the fingerprint reader under the screen. Screens respond to user movements, like touches, taps, or rotations.

A new field within touchable haptic technology is called haptography, the mimicry of object textures and movements. TanvasTouch is a pad with programmable textures that can be felt by users swiping their fingers across touchscreens, trackpads, and physical surfaces, mimicking clothing materials like wool and silk before buying the items.

Source: Tanvas Touch

Wearables

Wearable haptic systems create contact sensations, relying on tactile stimuli, such as pressure, vibration, or temperature, sensed by the nerves of the user’s skin.

Virtual Reality (VR) products are the most common application of wearable haptic technology available today. VR gloves are meant to mimic real-world impressions, and they receive input from the user who is controlling their virtual avatar. VR and AR can benefit greatly from the endless consumer engagement options that wearables and haptic tech can provide.  

Haptic Technology Uses

Haptic technologies offer numerous potential advantages. Here are several current and potential use cases for touch-based solutions that tap into the benefits of haptics and can produce a better user experience.

Product Design Applications

Haptic technology can improve the user experience by working through touch optimization.

Automotive infotainment systems will begin to incorporate more haptics into their feature lists. Touch screens will become responsive to the user, providing personalized settings for multiple drivers. Additional automotive applications include pedal feedback and steering enhancements, which are needed as drive-by-wire systems become more common. These help drivers avoid accidents and save fuel.

Health and Wellness

The newest advances in wearable haptics provide great opportunities within the health-tech industry.  Real-time haptic devices gather biometric data and can adjust the experience to suit the user.

Better data collection and feedback allow enhanced user experiences and, more importantly, improved health outcomes. TouchPoints has a wearable system which the TouchPoints CEO reports can reduce stress by 74% in 30 seconds.  This is done with a vibrating pattern that interrupts anxiety and builds a restful state.

Source: TouchPoints

Other companies involved with posture correction, like ergonomic furniture makers, app creators, or chiropractors, can use haptic technology to improve their products and benefit their users.  

Industrial Training

With haptic feedback, training environments can simulate natural work environments and labor conditions more closely, improving training and overall accuracy. Users can partake in virtual training scenarios in a safe, offline environment while using haptics to get a lifelike experience. 

This virtual haptic process can allow for training in assembly line usage, maintenance, safety procedures, and machinery operation. A similar haptic feedback system can also be used with product testing and many other uses, allowing users to train without risk to themselves or company property.

Accessibility

Accessibility to products and services can be improved for the visually disabled. Haptic technologies allow users to create virtual objects, interact with products, and even approximate an object’s appearance through touch-based sensory input. A Stanford team has developed a 2.5D display for the visually impaired to accomplish visual tasks.

Not only will these new haptic solutions create novel markets and aid those with accessibility restrictions, but they can help ensure a company stays compliant with access regulations.

Rehabilitation

Haptics has the potential to boost the speed and effectiveness of rehabilitation programs. A Dutch startup, SenseGlove, has created a glove that uses VR simulations and haptic training to aid with virtual recovery programs.

Source: SenseGlove

Their product allows someone suffering from nerve damage due to an accident, illness, or stroke to practice daily actions. Things like pouring a cup of hot tea or cutting a steak for dinner can be done in a safe digital environment.

Remote Tasks

With an internet connection, haptic controller, and connected robot, remote tasks will become easier and far less prone to error.

Industries lacking highly skilled specialists can connect via a virtual haptic environment, allowing subject matter experts to manipulate a robot from anywhere in the world or beyond.

Closing Thoughts

Haptic technologies have been around for decades. However, the sector has seen tremendous growth in the past few years. One APAC market report expects the world’s haptic technology market to grow at a compound annual rate of 12% through 2026.

Source: APAC

Haptics is no longer a video game gimmick. New advancements and applications are becoming more widely available. Businesses should explore implementing these technologies into their operations, marketing, and consumer experiences.

By embracing this innovative technology, companies can offer their users an enhanced experience that makes them feel connected to products, services, and the brand. Haptics enables us to feel much more connected, no matter how far the distance between us may be.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

AR, MR, VR, and XR

When someone enters the immersive technology arena, one of the first questions they may ask is: what’s the difference between virtual reality (VR) and augmented reality (AR)? 

These are reasonably easy to distinguish, but additional terms, such as mixed reality (MR) and extended reality (XR), are less standard. These terms are becoming more prevalent; while they remain distinct for now, they are likely to become joined aspects of the metaverse as the technology advances.

We will go through all of these concepts to improve our understanding and provide a few examples.  

What is VR?

Virtual reality, or VR, is what most prominent tech companies are pushing as the metaverse: an entirely immersive alternative reality that is coming to the mass market. It can be experienced by wearing a VR headset such as the Meta Quest, formerly Oculus.

Meta Quest 2, courtesy of GameStop

Wearing a VR headset is like having a large screen directly in front of you. It surrounds your vision, and you cannot see anything else, meaning you are entirely immersed in the digital environment. For example, a user could be at home yet transported to an entirely novel world through the headset’s immersive audio and visual experience.

An excellent example of a VR use case let hundreds of people in a shopping mall ride along with the European Rallycross Championship winner.

Virtual reality rally with the European Rallycross Champion, Reinis Nitiss, courtesy of Lattelecom

At the shopping center, people physically sat only in racing-car seats mounted to a wall, yet the virtual reality system put them in the car, riding along with the champion from Latvia at full speed on the rallycross track.

While the Oculus was the first widespread VR headset, it was priced out of the range of most consumers; the best-known initial application was much more accessible: the Google Cardboard. These simple folding cardboard viewers are still available and allow users to insert a mobile phone to serve as the device’s display.

The Samsung Gear VR was the next, more widely accessible application of VR, a head mount that came with every Galaxy S6 flagship phone purchase.

Courtesy of Samsung Gear

VR has broadened beyond these initial devices. With Meta’s (Facebook’s) purchase of Oculus and its intention to take over the metaverse space, the newer generation of devices is compelling and no longer the freebie novelty item it once was. VR has several entertainment uses and is now most familiar in gaming.

However, VR can also add a lot of value to other applications, such as education, manufacturing, and medicine.

AR Versus VR

The main idea behind AR is to add to the reality we are experiencing at any given time rather than completely overwriting our current surroundings and entering a new world.

While VR takes you away from everything around you, AR enhances the real-life environment by placing digital objects and adding audio to the environment through a handheld device. One of the best-known augmented reality applications that emerged in 2016 was Pokemon Go.

Courtesy of Informatics

A great use case for AR is the retail sector, providing online customers with benefits once solely the domain of in-store shopping. Through AR, a visual representation (a hologram) of an item, say a piece of clothing, can be overlaid on top of the customer’s current environment.

AR can also be an excellent tool to help customers understand the spatial orientation of objects, such as placing furniture, appliances, or fixtures in their immediate location and seeing if it fits into the potential buyer’s kitchen or office. 

Companies like Magic Leap are creating lightweight AR solutions and making the technology accessible, with industrial solutions available from the Americas to Asia. Magic Leap has been working with companies like Cisco, SentiAR, NeuroSync, Heru, Tactile, PTC, and Brainlab to refine and improve its devices for communication, training, and remote assistance across industrial environments, clinical settings, retail stores, and defense.

Courtesy of Magic Leap

The commercial AR market is developing rapidly as well, making it more accessible for consumers to view AR and to create augmented reality content. For example, Overlee offers canvas prints, augmented photos, albums, cards, and wedding invitations that can be viewed with AR to play a video along with the photo. Some wine brands have even added AR to their experience.

Courtesy of LivingWineLabels

AR and VR Versus MR

MR is similar to AR. It will not remove you from your current surroundings; instead, the tech reads your surroundings and adds digital objects into your environment. However, unlike most AR content, which can be viewed on a mobile device, you will need a dedicated headset, such as Magic Leap’s, to experience mixed reality fully.

Although MR and AR use cases often overlap, mixed reality can provide more significant interaction with digital content in many cases: there is no need to hold a mobile device to keep the illusion going. However, the headset requirement makes MR less accessible to the mass market. GSMA data, for example, shows 10.98 billion global mobile connections that AR can reach today.

The same cannot be said of MR headsets, which are pricey and still in their early stages. It will take time for this to change, but the potential is enormous; once hardware and software mature and adoption broadens, it could change quickly.

Closing Thoughts

VR has a head start in the field, being more accessible and easier to implement than AR and MR. However, VR is still far from fully established, and the area has several growth opportunities, including haptic body suits and omnidirectional treadmills.

Courtesy of Virtuix

Though VR does have a lead, the long-term prospects for the other realities are equally good. The main difference lies in the interface: the current generation of VR hardware is bulky and can cause dizziness or eye strain, which is less true of AR and MR. In addition, AR and MR offer many use cases for marketing, art, education, and industrial applications.

The current devices will become less intrusive, and though we use mobile devices now, items like Google Glass (but better designed) will become more common. The future likely belongs to a growing number of ergonomic devices for alternative realities rather than to cell phones.


Brain-Computer Interfaces and the Metaverse

What are the commercial promises of brain-computer interfaces, and how will they further connect us to the promises of the metaverse? These interfaces, initially sensory (worn on the scalp or skin) and possibly implanted in the brain in the future, could become the platforms that transform all parts of our diverse societies.

The Brain-Computer Merge

You may not have noticed, but with each passing day, we are slowly merging more and more with the technology around us. Our smartphones are our tools for instant communication and the answers to many of our questions, allowing us to focus on other things rather than that which occupied our minds in the past. 

We have implanted pacemakers and defibrillators that tell the cardiologist all about our hearts and correct our irregularities. We have implanted lenses in our eyes to fix vision issues. The technology around us now, especially with our smartphones, will not represent the most common interface in our future. 

What our smartphones do, and much more, will likely be incorporated into our bodies. Google Glass was not a successful product: it targeted the wrong users and was burdened with tech glitches and security concerns. It did, however, show that we could bring technology closer, supplying useful information and sending sound directly into the ear via bone conduction.

Source: The Verge

As brain-computer interface (BCI) systems progress, they will be an essential step forward in the brain-computer merge. A BCI’s role is the interpretation of the user’s neural activity. A BCI is just part of an environment that is more wired, has more sensors, and is digitally connected.   

With the current generation of experimental brain-computer interfaces, humans can, using only their minds, play video games, articulate prosthetic limbs, control their own limbs, work wheelchairs, and more. BCIs also have the potential to restore communication for patients who suffer from Alzheimer’s disease, head injuries, and stroke, allowing them to control computers that help them speak.

BCI technology will likely evolve toward enhancing sensory connection and communication. The most common use of BCI technology today is directional control of a computer cursor. Imagine moving your cursor and clicking without needing a mouse.

This is already being done using only electrophysiological signals (brain and blood-flow signals read by a system of sensors). Such BCI control systems have already been used by both humans and animals to act on the external world without conventional neuromuscular pathways such as speech or movement.
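In signal-processing terms, the cursor example boils down to decoding a control signal from neural features. The two-channel linear decoder below is a toy illustration of that idea only; the function, gains, and simulated amplitudes are all invented for this sketch, and real BCIs calibrate per user and filter heavily.

```python
def decode_cursor_step(ch_horizontal, ch_vertical, gain=10.0, baseline=1.0):
    """Toy linear decoder: map two neural feature amplitudes
    (e.g., band power from two sensor sites) to a cursor step.
    Purely illustrative; not any real BCI API."""
    dx = gain * (ch_horizontal - baseline)
    dy = gain * (ch_vertical - baseline)
    return dx, dy

x, y = 0.0, 0.0
# Simulated stream of (horizontal, vertical) feature amplitudes.
for h, v in [(1.2, 1.0), (1.3, 0.9), (1.0, 1.4)]:
    dx, dy = decode_cursor_step(h, v)
    x, y = x + dx, y + dy

print(x, y)  # cursor drifts right, then up
```

In a real system the mapping from signal to movement is learned during a calibration session rather than fixed, but the core loop (read features, decode a step, move the cursor) looks much like this.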

Brain-Computer Interfaces Alongside the Metaverse

The metaverse is a fusing of the real and digital worlds. It’s either an entirely simulated digital environment, as is the case of virtual reality (VR), or an overlay of a digital experience to the real world with augmented reality (AR). 

Put differently, the metaverse can be a platform where users experience the real through an animated or digital encounter. A metaverse that combines augmented reality with the real world can give us more immersive, next-level platforms. The metaverse is intended to make our lives more natural and “realistic,” including socializing, work, and entertainment.

Scientists, researchers, corporations, and entrepreneurs are making strides with their new and advanced applications. Many of these applications are intended to augment human abilities, fulfilling desires to be stronger, smarter, and better looking. 

Exoskeleton by SuitX

With the BCI connection, it’s believed that this initiative will help transform technology, medicine, society, and the future. Current devices can already cultivate human abilities that exceed former standards, not dissimilar to the powers of Iron Man: SuitX’s exoskeleton can reduce lower-back loads by 60%.

As these technologies continue to merge with BCIs, it’s believed that the opportunity to augment human capability will be even greater.  

Elon Musk’s Neuralink has been working on a consumer-intended high-bandwidth BCI that focuses on four parts of the brain. 

Source: Neuralink

Neuralink has shared a video of a macaque playing “MindPong” by way of chips embedded in a few regions of its brain. The primate was trained to play the game by simply thinking about moving its hands. The goal is for future “Neuralinks” to tie the brain to the body’s motor and sensory cortices, thereby enabling people with paraplegia to walk again.

Inside a Metaverse

Technical training inside a metaverse consists of providing technicians with advanced features and simulations capable of operating 3D representations of complex systems, instruments, or machinery. 

BCIs with simulation technology will combine to empower the metaverse, allowing remote support and maintenance of devices and equipment. This could be a matter of connecting with experts who would control the repair of the system by thinking about moving their own hands to make repairs. 

This would allow for the “switching on” of virtual reality engineers and technicians when an unforeseen repair occurs. It is not much of a further step to imagine the same procedure for doctors and surgeons.

Dating and socializing in virtual reality, along with virtual movies and museum tours, may become common occurrences. Such interactions could be enhanced by a direct brain interface that enriches a partner’s mind, adding positive experiences from the external environment (“I wish you could see things from my point of view” would become possible).

Closing Thoughts

Applications of brain-computer interfaces are spread across many fields and are not limited to military or medical purposes. The fullest realization of these technologies will certainly take time and incremental improvements, but they will be well-suited for the metaverse. 

This process will require significant testing and a long period of adoption. However, brain interfaces can be game changers for patients and a source of incredible experiences for many others.

We could eventually see a future that no longer has brain-computer interfaces but goes toward the next step of direct brain-to-brain connections. This new type of connection is a very exciting step that would bring humans closer together, allowing us to understand how we all experience the real and virtual worlds.  


Data and Machines of the Future

As we move toward our future, two long-standing tensions between data and computing power stand out: we have always had more data than we can process, and the data we have is not always the best data to be processing. We are reaching the point where both of these limits are starting to dissolve.

First, we are creating computers that have the ability to process the vast amounts of data that we are now creating. Second, we are creating synthetic data that may not be “real.” However, if it’s “authentic,” the users may prefer it. Let’s discuss these two topics and how they will interact in the future.  

Rise of the Machines

A new class of computers is emerging that stretches the boundaries of the problems they can solve. These new devices, from three defined areas (quantum computing, high-performance computing, and biology-inspired computing), are pushing aside the limits of Moore’s Law and creating a new computing-capability curve.

Companies and industries have always been defined by their limitations, the currently unsolvable problems. However, these new machines may help companies solve and move beyond the presently unsolvable.

These ongoing challenges define the boundaries of companies and their core products, services, and overall strategies at any given time. For decades, the financial services industry has operated under the assumption that predicting the movement of the stock market and accurately modeling market risk is either intractable or impossible; in the near future, it may be neither.

When combined, these emerging technologies can potentially make such core challenges achievable. With quantum computing as the next level of problem-solving, paired with high-performance computers (HPCs) or massively parallel processing supercomputers (MPPSCs), the ability to use never-before-seen swaths of data becomes possible.

As business leaders, we must create partnerships and inroads to understanding the latest technological developments in the computing field and in our industry at large. This creative process includes experimentation and the design of a skills pipeline that will lead to future success.  

New Data Types

With the increases in chatbots, augmented reality (AR), and synthetic data (including deep fake audio, images, and video), we are forced to evaluate what is “real” and what is not. When we see news of the latest global issue, we want to know that it is real, but do we care if the newest advertisement for our favorite snack is? 

We may even prefer the unreal. Say we are discussing a sensitive health issue with a synthetic (e.g., AR-rendered) nurse, or we are training an AI using synthesized data designed to remove historical discrimination; in such cases, the unreal may be the preference.

As technology progresses, we will shift from a desire for the real to a desire for the authentic, and authenticity is defined by four foundational measures:

1. Provenance. What is the source of the data?

2. Policy. How has the data been restricted?

3. People. Who is responsible for the data?

4. Purpose. What is the data trying to accomplish?
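One way to make these four measures operational is to attach them to every dataset as a small provenance record. The field names and example values below are illustrative only, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class AuthenticityRecord:
    """The four foundational measures of authenticity, as dataset metadata."""
    provenance: str   # What is the source of the data?
    policy: str       # How has the data been restricted?
    people: str       # Who is responsible for the data?
    purpose: str      # What is the data trying to accomplish?

# Hypothetical record for a synthetic training set.
record = AuthenticityRecord(
    provenance="synthetic, generated from anonymized 2020 claims data",
    policy="internal use only; no re-identification attempts",
    people="data-governance team",
    purpose="train a fraud model without exposing customer records",
)
print(record.purpose)
```

Carrying such a record alongside the data lets a consumer of the dataset answer all four questions before deciding whether to trust it.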

Synthetic data aims to correct data bias, protect data privacy, and make AI algorithms fairer and more secure. Synthetic content helps design more seamless experiences and provides novel interactions with AI that save time and energy and reduce costs. However, the use of synthetic data will be complex and controversial.

Data and Computing

High performance is a growing necessity. IDC reported that 64.2 zettabytes (ZB) of data were created or replicated in 2020, a figure expected to almost triple to 180 ZB by 2025. Only 10.6% of the 2020 data was useful for analysis, and of that, only 44% was used.
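The gap between data created and data actually used can be made concrete with the report’s own figures; the percentages below come from the text, and the arithmetic is purely illustrative:

```python
# Rough arithmetic on the IDC figures quoted above (illustrative only).
created_2020_zb = 64.2        # zettabytes created or replicated in 2020
useful_share = 0.106          # fraction useful for analysis
used_share_of_useful = 0.44   # fraction of the useful data actually used

useful_zb = created_2020_zb * useful_share
used_zb = useful_zb * used_share_of_useful

print(f"Useful for analysis: {useful_zb:.2f} ZB")   # 6.81 ZB
print(f"Actually analyzed:   {used_zb:.2f} ZB")     # 2.99 ZB
print(f"Share of all data:   {used_zb / created_2020_zb:.1%}")  # 4.7%
```

In other words, under these figures less than 5% of the data created in 2020 was ever analyzed, which is the motivation for the high-performance computing discussed next.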

The answer to this massive issue is high-performance computing (HPC); while the field of HPC is not new, its potential has expanded. Today’s smartphones contain the processing power of supercomputers from three decades ago.

Now GPUs, ASICs, and other purpose-built processors, such as Tesla’s D1 Dojo chip, designed specifically for computer-vision neural networks and intended as the foundation of autonomous-driving tech, are pushing HPC capability to new levels.

The Unreal World

Another issue for the future is the unreal. Dealing with a call-center bot that does not understand your request is maddening, but AI is already becoming indispensable in business. It constantly improves, and what was once a “differentiator” for businesses has now become a necessity.

Synthetic data is being used for AI model training in cases where real-world data cannot apply. This “realish” yet unreal data can be shared, protecting confidentiality and privacy while maintaining statistical properties. Further, synthetic data can counter human-born biases to increase diversity.
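A minimal illustration of “maintaining statistical properties”: fit a simple distribution to sensitive values and sample new, shareable values from it. This is a toy sketch using a single Gaussian; production synthetic-data tools model full joint distributions and correlations, which this deliberately ignores.

```python
import random
import statistics

random.seed(0)

# Pretend these are confidential real-world measurements.
real = [random.gauss(mu=50.0, sigma=5.0) for _ in range(10_000)]

# Fit the simplest possible model: mean and standard deviation.
mu, sigma = statistics.fmean(real), statistics.stdev(real)

# Generate synthetic records that share those statistics
# but correspond to no real individual.
synthetic = [random.gauss(mu, sigma) for _ in range(10_000)]

print(round(statistics.fmean(synthetic), 1))  # close to the real mean
```

The synthetic list can be shared freely: it preserves the aggregate statistics a model needs while containing no actual customer record.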

However, synthetic data comes with significant challenges. The world of deep fakes and disinformation is causing predictable damage, and the use of AI algorithms in social media creates echo chambers, filter bubbles, and algorithmic confounding that can reinforce false narratives. 

New Computing Technologies

Quantum Computing

While HPCs and supercomputers are able to process more data, they’re simply new versions of the same old stuff. 

The next generation of computer evolution will likely arrive when quantum computers begin to solve problems that we currently consider intractable. Quantum research is still in its infancy but is likely to follow an exponential curve.

The estimated number of qubits needed to crack current levels of cybersecurity is several thousand. The devices being designed by IBM and Google have reached an announced 127 qubits, while others claim to have reached 256. Still, this is up from 53 for IBM and 56 for Google in 2019.

A doubling every two years sounds like Moore’s law. However, Moore’s law is not the same for quantum computing. Qubits’ property of entanglement means that by adding one more qubit to a quantum system, you double the information the quantum system can compute. The move from 53 to 127 means computing power has doubled 74 times in just three years.  
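Under the doubling assumption stated above, the jump in capability can be computed directly. This is a back-of-the-envelope sketch of the article’s own arithmetic; real quantum advantage also depends on error rates and coherence, which it ignores:

```python
# If each added qubit doubles the state space a machine can represent,
# the 53 -> 127 qubit jump corresponds to 2**(127 - 53) doublings.
old_qubits, new_qubits = 53, 127
doublings = new_qubits - old_qubits
ratio = 2 ** doublings

print(doublings)       # 74
print(f"{ratio:.2e}")  # ~1.89e+22 times the representable state space
```

Compare this with classical Moore’s Law, where 74 doublings of transistor density would take well over a century at two years per doubling.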

Mimicking and Using Nature

The other technology that is reshaping computing is taking lessons from nature. Biology-inspired computing takes its ideas from a 3.7-billion-year-old system. There are two subclasses of biocomputing:

1. Biomimicry, or computing systems that draw their inspiration from biological processes.

2. Biocomputing, or systems that use biological processes to conduct computational functions.

Biomimicry systems have been used in chip architectures and data-science algorithms. However, we are now beginning to see machines that are not only mimicking biological operations but actually leveraging biological processes to compute.

Data storage is a biocomputing darling for good reason. Based on natural DNA, one estimate predicts that an exabyte of data (1 million terabytes) could be stored in one cubic centimeter of space and persist for over 700,000 years.
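Taking that density estimate at face value and combining it with the IDC figure cited earlier (64.2 ZB created in 2020), the implied volume is striking. The arithmetic below is illustrative only, built entirely from the article’s own numbers:

```python
# 1 exabyte per cubic centimeter, per the DNA-storage estimate (illustrative).
eb_per_cm3 = 1.0
zb_created_2020 = 64.2              # zettabytes; 1 ZB = 1000 EB

eb_created_2020 = zb_created_2020 * 1000
volume_cm3 = eb_created_2020 / eb_per_cm3
volume_liters = volume_cm3 / 1000   # 1 liter = 1000 cm^3

print(f"{volume_liters:.1f} liters")  # 64.2 liters for all 2020 data
```

Every byte of data created or replicated worldwide in 2020 would, at this density, fit in roughly a filing-cabinet drawer.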

Moving Forward

How do businesses incorporate new forms of data and new ways of computing into practice? 

The first action is to begin evaluating how these technologies will shape your industry and operations. Which problems are currently accepted as a cost of doing business, and what would change if they could be solved? How can synthetic data improve your current business functions, and what risks could have a negative impact? Which kinds of machines could affect your business first?

Those who desire to take an active role in shaping the future should consider what hardware can be used to solve the currently unsolvable.  

No matter the industry, forging partnerships is a critical step. Most businesses can gain skills and capabilities from such partnerships, and many industry problems require collaboration at scale.

Alliances and partnerships formed today will produce the industry leaders of tomorrow.

Closing Thoughts

We have always been defined by our unanswerable questions, and the advent of computers has helped us solve grand challenges. We are also facing a synthetic, unreal world intended to improve our lives; yet, depending on the user’s intent, such data and its progeny can become tools of malice.

Both of these concepts have reached the point where business leaders can no longer treat them as abstract. They’re rapidly improving, and their impact on industries will be profound in the coming decade. The unsolvable will become a thing of the past, and what we believe is real will come into question.


Web 4.0 and Beyond

Two fundamental questions surround Web 4.0: Will virtual reality (VR) gain traction in society? What will social media look like in the future? Even if we are aware of the exponential rise of technology, it’s still challenging to forecast where we will be in ten years.

Although details are hard to predict, reviewing the past reveals a clear direction for what Web 4.0 may entail. Web 3.0 is still in its infancy, but experts are already romanticizing how Web 4.0 could change the world. 

To illustrate how we arrived at the current state of Web 3.0, we will review each previous version of the internet. This article will then hypothesize on a possible Web 4.0. 

Web 1.0: Beginning

The term “Web 1.0” refers to the beginnings of the internet.

Defense Advanced Research Projects Agency (DARPA) research initiatives helped develop protocols like TCP/IP, which enabled networked computers to communicate with one another. The resulting network quickly earned the name “Internet.” Web pages linked to one another comprised the majority of this early internet.

Web 1.0’s state was very much read-only, in contrast to the web as it is now. Since static pages made up most online pages, there were hardly any interactive features.

Design was mostly minimalist then: each webpage typically consisted of a combination of text and images over a pure white background.

The internet’s initial iteration was ground-breaking: a brand-new system that allowed anyone with access, anywhere in the world, to share information.

Users had little to do but browse that information. The web eventually needed to change as more people began using the internet.

Web 2.0: Interactive

The need for online collaboration, sharing, and communication skyrocketed in the late 1990s.

Because of this, many engineers predicted a new era for the internet. “Web 2.0” is the term coined to define this new era by writer and web designer Darcy DiNucci.

It signalled a paradigm change from static web pages to interactive web applications. People created online communities on Web 2.0 websites.

Soon, many social networking sites, blogs, wikis, and content-sharing platforms were developed. For the first time, users could add to a website’s content instead of just reading it. This sparked the development of online shopping, where customers could not only buy and sell goods but also post reviews.

Platforms for sharing and publishing information were increasingly popular in the 2010s.

Sites like YouTube, Instagram, and Facebook figured out how to make money off the internet’s new function as an outlet for ideas and creative expression.

Your online identity has developed into a brand-new aspect of you. With the advent of smartphones, the internet quickly became portable, making it accessible to billions of people.

This swiftly made room for the web’s subsequent expansion.

Web 3.0: Big Data and AI

Today marks the era of big data.

Enormous platforms now own vast amounts of priceless data, thanks to the increasing volume of information being posted to the internet, which has become the foundation of our economy.

Data is the new oil, according to a 2014 article in WIRED Magazine. The internet’s widespread use and the amount of data have set the stage for the web’s next chapter.

The precise concept of Web 3.0 differs depending on what you read.

Tim Berners-Lee, who created the World Wide Web, calls this next phase a “semantic web” that provides access to an “unbelievable data source.”

According to his forecast, the web will soon be able to comprehend the intricate relationships between concepts in the real world. With a new emphasis on users interacting with artificial intelligence, Web 3.0 moves beyond human-to-human connection.

We now use this part of the internet in our everyday lives.

Thanks to algorithms, our news feeds and suggestions are already filled with information pertinent to us. These algorithms will only get stronger as more data is gathered.
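The “pertinent information” these feeds surface is, at its core, a relevance score. The sketch below ranks items by a simple interest/tag overlap; the data and scoring are invented for illustration and are a drastic simplification of production recommender systems:

```python
# Hypothetical user-interest weights, learned from past behavior.
user_interests = {"finance": 0.9, "technology": 0.7, "sports": 0.1}

# Candidate items, each tagged with topic weights.
articles = {
    "Quantum computing milestones": {"technology": 1.0},
    "Championship recap": {"sports": 1.0},
    "Fintech funding roundup": {"finance": 0.8, "technology": 0.5},
}

def relevance(tags):
    # Dot product of the user's interest weights and the item's tags.
    return sum(user_interests.get(tag, 0.0) * w for tag, w in tags.items())

feed = sorted(articles, key=lambda title: relevance(articles[title]), reverse=True)
print(feed[0])  # "Fintech funding roundup": 0.9*0.8 + 0.7*0.5 = 1.07
```

As more behavior is observed, the interest weights sharpen, which is exactly how “more data” makes these algorithms stronger.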

What Is Web 4.0?

The user journey is center stage and the primary focus with Web 4.0. 

Most people couldn’t imagine a world without voice commands, touchscreen interfaces, or auto-correction. Despite these developments, we haven’t yet achieved a completely smooth experience.

Constrained by how quickly we can speak or type, we need enormous effort to communicate our thoughts digitally. The line between mind and computer should become less distinct with the next significant advancement in web usage, Web 4.0.

The “symbiotic web,” the symbiotic interaction between man and machine, is Web 4.0’s bold prediction. The idea is that AI will develop to the point where it can comprehend our ideas and browse the web on our behalf.

A 2D screen might not represent this web adequately. Virtual or augmented reality technologies may supplement or form the foundation of Web 4.0. The intention remains to capture and deliver the complete human experience.

The internet has gradually adapted to focus on graphics and video instead of text. It’s hardly a stretch to argue that we may soon experience lifelike experiences that fully immerse users while they explore a new digital environment.

What’s the Technology Behind Web 4.0?

Several ground-breaking technologies may form the basis of Web 4.0. 

Elon Musk’s business Neuralink has successfully tested wireless brain implants on animals. These brain-computer interfaces (BCIs) will soon enable us to communicate easily with and operate the gadgets around us. 

Alternatively, tech behemoths are moving aggressively into the augmented and virtual reality fields. Metaverse technology is in development by businesses like Facebook and Microsoft. It could eventually replace traditional human-to-human connections.

The way we interact with the web is being revolutionized by augmented reality (AR) applications like nReal and eye-tracking technologies.

To handle the enormous quantity of input required from both the human brain and the physical world around us, Web 4.0 will need AI and sophisticated machine learning (ML) algorithms.

For instance, ML and computer vision are used by self-driving cars to understand the actual roads and potential barriers in a typical commute. This has already made significant strides.

The term “Internet of Things” describes a network of items equipped with sensors and software that can easily communicate with the devices around them. Smart homes have already been rising in popularity. Users can converse with their devices by speaking to AI-powered virtual assistants, such as Alexa on the Amazon Echo and Google Assistant.

With Web 4.0, smart cities and fully AI-powered infrastructure systems may become more common.

Use Cases for Web 4.0

When we can use the power of machines to augment our thoughts, we have the potential to transform entire industries. Here are some prospective Web 4.0 applications that might be available to you sooner than you think.

Medicine

Brain-computer interfaces (BCIs) are now being explored with the hope of assisting those with neurological disorders and similar limitations. Soon, BCIs might help speech synthesis or assist amputees in controlling prosthetic limbs.

Education and Work

The public might adopt Web 4.0 broadly as the technology improves. One day, BCIs might be employed to boost learning effectiveness or work performance. Consider a program that knows how best to explain a subject to you specifically, and how best to test you.

The pandemic has made remote work far more widespread, and breakthroughs in virtual reality could further alter how we work.

Security

“Passthoughts” are an alternative method of logging in to your preferred applications that several researchers are currently testing. Soon, we might be able to unlock our devices simply by thinking about doing so, which could prove even more secure than the biometrics we use today.
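The core idea can be sketched as template matching: enrol a feature vector derived from a user's brain signal, then accept a login attempt only if a fresh reading is close enough to the template. This is a deliberately simplified illustration under invented assumptions (the feature vectors, threshold, and function names are all hypothetical); real passthought research uses far more sophisticated signal processing and classifiers.

```python
# Hypothetical "passthought" check: compare a new brain-signal feature
# vector against an enrolled template using Euclidean distance.
import math

def euclidean(a, b):
    """Straight-line distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def authenticate(template, attempt, threshold=1.0):
    """Accept the attempt if it lies within `threshold` of the template."""
    return euclidean(template, attempt) <= threshold


enrolled = [0.2, 0.8, 0.5, 0.1]       # stored at enrolment
genuine  = [0.25, 0.75, 0.55, 0.12]   # same user, slight signal noise
imposter = [0.9, 0.1, 0.3, 0.7]       # a different user's signal

print(authenticate(enrolled, genuine))   # True
print(authenticate(enrolled, imposter))  # False
```

The threshold trades off false rejections of the genuine user against false acceptances of imposters, the same trade-off every biometric system faces.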

Social Media

The Web 4.0 metaverse could become a new venue for social gatherings.

As remote learning and remote work gain popularity, connecting digitally with co-workers, friends, and family will become essential. Thanks to VR and AR, people from all over the world will be able to meet and communicate as though they were in the same physical space.

The Challenges Facing Web 4.0

As with any new technology, there are several challenges and risks associated with Web 4.0. 

Brain-computer interfaces will remain vulnerable to malicious programs. Because they hold highly private information about their users, these interfaces could become prime targets for hackers.

Virtual reality technology is not entirely safe either. Some reports suggest that extended VR use can harm the user’s vision or even trigger seizures. A fully immersive digital experience won’t win over the masses until such safety concerns are addressed.

Data ownership also becomes hazy when technologies practically become part of us. BCIs could turn into a means for tech corporations to profit from your data.

There was a public uproar when Google announced its US$2.1 billion acquisition of Fitbit in 2019, giving it access to millions of consumers’ fitness data. As the web becomes ever more attuned to us, we should stay alert to whether users are being exploited.

Closing Thoughts

The internet evolves as technology advances, taking into account the capabilities of modern innovation and the needs of its users.

People born after 1990 have never known a time before the internet. Children born today will never experience life without social media. Anyone born in the next ten to twenty years may never experience a world without an enhanced mind.

We have become more connected as the web has developed, but that connection has always come at a price. Future technologists and developers should be mindful of the security and privacy of their users. Whatever the outcome of these breakthroughs, one thing is sure: the future is not so far away.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.
