Autonomous Ride-Hailing

Uber and Lyft have transformed the short-distance ride-hailing market that once belonged to a handful of local taxi companies. As they took over these markets, they also changed the way we think about travel.

With the arrival of autonomous vehicles, even asking a friend for a ride to the airport is beginning to disappear. Several new companies are testing the technology, and some are already running fully autonomous ride-share services in limited areas. Here we take a deep dive into the current state of the autonomous ride-hailing market.

The Rise of Autonomous Vehicles

Autonomous technology is the next stage for the travel industry. The growing success of electric vehicles set the tone, even if battery costs still have a long way to fall. But it is better to think of electrification as one door leading to many.

For example, artificial intelligence will play a crucial role in autonomous ride-hailing: route optimization, accident prevention, and maximized utilization (keeping all vehicles active). Not only does this lower costs for companies entering the space, it also dramatically improves urban efficiency.

ARK Investment Research has predicted that the price of autonomous electric vehicle transportation will fall to $0.25 per mile by 2030.

These three factors will drive down the cost of ride-hailing services. Industrialized countries will see the largest reduction in cost per mile because labor makes up over 70% of the cost, followed by the vehicle itself and its fuel and maintenance. ARK Research has estimated that the price per mile of an autonomous ride-hail could fall by up to 88%.
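To make that arithmetic concrete, the back-of-the-envelope sketch below combines the two ARK figures quoted above (an 88% reduction arriving at roughly $0.25 per mile) to infer today's implied cost per mile; the 70% labor share is the figure cited in the previous paragraph, and the result is only illustrative.

```python
target_cost = 0.25   # ARK's projected cost per autonomous mile (USD)
reduction   = 0.88   # ARK's estimated reduction vs. today's ride-hail cost

implied_today = target_cost / (1 - reduction)
print(f"Implied current cost: ${implied_today:.2f} per mile")          # ~$2.08
print(f"Labor share of that:  ${implied_today * 0.70:.2f} per mile")   # at a 70% labor share
```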

The autonomous ride-share total addressable market (TAM) is estimated to reach between $11 and $12 trillion for two key reasons. 

1.     High utilization rates. Electric autonomous vehicles can provide rides to clients 24 hours a day, only offline during charging and maintenance times.  

2.     Low operation costs. The cost of a ride-hail will drop to $0.25 per mile due to several factors. Accidents per mile driven by autonomous vehicles are already lower than for human drivers, and with more autonomous vehicles on the roads, this will drop further. Autonomous vehicles also drive more efficiently, reducing fuel costs by up to 44% for passenger vehicles and 18% for trucks.

Autonomous Ride-Share Programs

Cruise

Cruise is a subsidiary of General Motors and became the first company to begin an autonomous ride-hailing service in a major city. In June 2022, Cruise received approval from the California Public Utilities Commission and started its public, driverless, fare-charging autonomous ride-hailing service in San Francisco.

Cruise launched with a fleet of 30 autonomous all-electric Chevy Bolts. These small cars ferry passengers around many parts of the city, and the service is currently available daily from 10 p.m. to 6 a.m., provided weather conditions are “normal.”

Source: Cruise

Cruise vehicles are limited to a maximum of 30 mph and cannot operate if there is heavy rain, fog, smoke, hail, sleet, or snow. Cruise is looking to add more Chevrolet Bolts to its fleet and increase the time it’s allowed to operate. 

Since 2020, Cruise has delivered a total of 2.2 million meals to San Francisco’s needy through a partnership with local food banks. Cruise has also begun the groundwork for autonomous ride-hailing services to launch in Dubai in 2023 and later in Japan.

Baidu

Chinese technology giant Baidu began its Autonomous Driving Unit (ADU) in 2014 to design vehicles that could move passengers without the need for a driver. Baidu launched its “Apollo Go” self-driving robo-taxi business in 2017 and upped the ante with the Apollo RT6 autonomous driving vehicle in July 2022.

In that same month, Baidu received approval from the Beijing authorities to launch a robo-taxi service within a Beijing suburb. The new Apollo RT6 has a detachable steering wheel because the car no longer needs a driver.

Source: Baidu

In August 2022, Baidu also obtained permits to operate a fully autonomous taxi service in two Chinese megacities, Wuhan (11 million residents) and Chongqing (30 million residents). Baidu’s 100% autonomous robo-taxi services will begin on a small scale, with a fleet of only five vehicles in each city providing service in designated areas from 9:30 a.m. to 4:30 p.m.

Source: Baidu

Pony.ai

Pony.ai also received permits from Beijing authorities in July 2022 to provide its fare-charging, driverless robo-taxi service. With this new permit, it can now charge fares for rides within a 60-square-kilometer area (23.1 sq miles) in Beijing’s Yizhuang suburb.

The service area includes public facilities like underground stations, parks, and sporting centers, as well as key residential and business districts. The new permit builds upon two other recent Beijing autonomous vehicle milestones. Pony.ai was allowed to launch a robo-taxi service with safety drivers in November 2021. 

Source: pony.ai

Since November 2021, Pony.ai has provided over 80,000 rides from 200 pickup or drop-off locations. By July 2022, its robo-taxi service, “PonyPilot+,” had completed a total of 900,000 orders, with nearly 80% coming from repeat customers. Further, 99% of passengers provided positive reviews once their trip was complete, with an average 4.9-star rating on a 5-point scale.

Hyundai Motors

Korean automaker Hyundai launched its RoboRide autonomous ride-hail service in Gangnam, Seoul. The South Korean Ministry of Land, Infrastructure and Transport issued Hyundai permits to operate its autonomous vehicles in Seoul.

The Seoul Metro Government established a system that connects traffic signals with autonomous vehicles. This system also supports autonomous vehicles with remote functions, such as lane changing under circumstances where fully autonomous driving is not feasible. 

Hyundai has been testing autonomous driving in Gangnam since 2019. The program so far includes only two self-driving IONIQ 5 vehicles, operating Monday to Friday from 10 a.m. to 4 p.m. with up to three passengers. The program is slated to expand to the general public after successful tests.

Source: SAE

Waymo One

Waymo One, the autonomous ride-hailing service from Alphabet (Google), started as the Google self-driving car project and has been running autonomous rides in the Phoenix metro area. It has recently expanded its program from the East Valley suburbs, where it charges fares, to a new pilot program in central Phoenix.

Both services run 24 hours a day, seven days a week. In its 2021 safety report, Waymo states that it has driven millions of miles on public roads in its ten years of service and, with simulations, has completed billions of driving miles.

Source: Waymo

Closing Thoughts

As the number of autonomous vehicle ride-hailing projects increases, we will become increasingly used to the idea. The number of miles driven (both actual and virtual) will continue to grow, and as this happens, the insurance industry will begin to push toward autonomous driving. 

In the U.S. and other industrialized countries, the cost of human-driven vehicles is high, so economics alone will push toward autonomy. The benefits of optimized fuel use and reduced traffic will continuously argue in favor of autonomous driving. We will soon all be passengers.

Disclaimer: The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

How Digital Wallets Transform Banking

Our economy is switching to 5G, and in a few more years we may see 6G speeds. However, the global economy’s digital transformation is far from complete. Our struggles with Covid-19, affecting both health and commerce, have pushed the world toward global, digital connectivity. Digital wallets, in particular, will forever transform the way we bank, shop, and pay.

Most of the world, including developed economies, is still only in the early stages of a true digital transformation. We will look at the future of digital wallets and see how they will become an integral part of the comprehensive digital economy that is coming for all of us. The new connected economy will be defined by several pillars, all affecting our daily lives.

Digital Wallets

Digital wallets allow their owners to store and spend funds digitally in the form of “real” money linked to a debit card, credit card, gift card, coupons, or loyalty points. Digital wallets differ from other online payments because they let the user save payment information by adding card or account details to the app. When payment is required, the buyer pays straight from the app, simply holding the smartphone close to the reader rather than remembering or entering payment credentials.

This is only the start of digital wallet capabilities. Digital wallets can do much more, storing loyalty cards, airline boarding passes, movie tickets, hotel door keys, and more. The recent growth of this technology has allowed many to leave bulky wallets behind and has pushed our economy toward cashless payments.

Apple, Samsung, and Google have all integrated these wallets into their devices and have become the biggest players in the space. Retailers like Walmart and Alibaba have added digital wallet capabilities to their checkouts, and PayPal, Cash App, and Venmo, which offer digital wallet services, have grown into financial powerhouses.  

Banking’s Future

Beyond the convenience digital wallets provide at checkout, they can potentially solve the cross-border banking problem, a difficult-to-navigate and disjointed process. Opening an international bank account is often long and painful, and international transfers can add more roadblocks and delays lasting days or more. 

New fintech firms allow businesses to open their own international accounts with a multicurrency IBAN in the organization’s name. Virtual wallets then make the process easier with same-day payments, while the company can keep funds in multiple currencies, allowing for prompt payments and currency exchange.

The Technology of Digital Wallets

Digital wallets start with a digital core, the foundation behind the digital transformation of banking. The digital core refers to the applications and platforms a financial institution uses in its transition to a digital business.

It then uses open APIs (application programming interfaces) to integrate payment platforms and digital wallets, which bring front-end benefits to consumers. With these fundamentals, institutions can build services that communicate effectively and directly with clients, driving transformational change. There are already many popular crypto wallets in Europe, Asia, and the Americas, nearly the whole world.
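As a minimal sketch of what such an open API integration might look like from the wallet side, the snippet below posts a payment request to a hypothetical institution's endpoint. The URL, field names, and token are illustrative placeholders, not any specific provider's interface.

```python
import requests

API_BASE = "https://api.example-bank.invalid/v1"   # hypothetical open API endpoint
ACCESS_TOKEN = "replace-with-oauth-token"          # placeholder credential

def initiate_wallet_payment(wallet_id, amount, currency, payee_iban):
    """Ask the institution's open API to move funds from a digital wallet."""
    payload = {
        "wallet_id": wallet_id,
        "amount": amount,        # amounts as strings avoid float rounding issues
        "currency": currency,
        "payee_iban": payee_iban,
    }
    response = requests.post(
        f"{API_BASE}/payments",
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()       # e.g., {"payment_id": "...", "status": "pending"}

# Example call with placeholder values:
# initiate_wallet_payment("wallet-123", "250.00", "EUR", "DE89370400440532013000")
```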

Beyond the open APIs, we will see more smart ledgers and wallet management programs come forward. These blockchain-based smart ledgers will transform the handling of digital wallets, offering a way to record, transfer, and store alternative assets in token form and adding to digital wallet capabilities. When combined with API-accessible wallet management, users will experience a fully integrated digital payment model within a single platform.

Crypto’s Potential

The rise of cryptocurrencies is still considered an untapped frontier of digital wallets. Trading of these non-tangible digital currencies has increased, and the price of a bitcoin has risen from $1 in 2011 to tens of thousands of dollars today. Though still speculative, crypto is ripe for continued growth, and the push for central bank digital currencies (CBDCs) shows that the banking sector is concerned. It’s even possible to use APIs for algorithmic trading.

Visa is hedging its bets, building the structures for CBDC integration and for its own crypto digital wallet. Institutional interest and strong demand across wealth management are apparent, and a significant blockchain product offering has the potential to transform the way markets behave. The blockchain value proposition has shifted to what else a blockchain can do beyond storing value.

Digital Wallets Connect Economies

According to a report on the connected economy by Stripe and PYMNTS, which surveyed over 15,000 participants from 11 countries, the ongoing digital transformation has reached only about a quarter of its full potential in the countries studied. These 11 countries represent about 500 million adults, a small portion of our now 8 billion global population.

Source: PYMNTS

Brazil and other developing countries have massive potential to grow their connected stature. But even in highly connected places such as Spain, the UK, and Singapore, only about one-third of their digital connectedness has been achieved. The untapped potential hints that there are roadblocks to be overcome and transformation to be had. 

Streaming and Social Media

On average, the survey found that 87% of respondents were connected to the internet. However, fewer than 20% were highly engaged with digital activities, especially shopping, an interesting, ironic result of the slowing but persistent pandemic. Streaming services are the exception.

The research found that seven times as many consumers watch streaming video daily on YouTube, HBO, or Netflix as shop on a marketplace like Amazon, Etsy, or eBay. Social media is the other bright spot, with five times as many consumers checking their social media as ordering food.

Digital Wallet Use Is Here to Stay

Digital wallets are the key to this connected future. Covid-19 brought a growing embrace of touchless or contactless payments, speeding up digital wallet adoption. There is no clear digital wallet leader, and use patterns differ based on geography.

Germany is the most digital wallet-centric nation: more than 40% of all domestic online transactions use digital wallets, and 84% of those run through PayPal, which alone accounts for 37% of all online transactions in the country.

Sources: Stripe and PYMNTS

In 2019, mobile wallets surpassed credit card use globally, becoming the most widely used payment type.

Juniper Research predicts that the number of unique digital wallet users will grow from the current 2.6 billion to 4.4 billion by 2025. China and India will lead the way, accounting for nearly 70% of all digital wallet transactions, with the US and UK lagging in digital wallet adoption. 

Digital wallets have been successful in areas with low card penetration but high phone use. Southeast Asian consumers skipped cards, going from cash to mobile wallets, and digital wallet providers have done exceptionally well. 

With this adoption of digital wallets, newer forms of digital currency, whether cryptocurrencies or CBDCs, will be in demand. Future digital wallets will seamlessly store and pay in several currencies, particularly as many retail online brokerages now offer both crypto and checking accounts.

Closing Thoughts

As we become digitally connected, digital wallets play an obvious, necessary role. Their reach will spread, and governments and companies will push for their continued use. The growing range of services they supply, solving cross-border transaction issues and improving the ease of banking, will ensure that we use our digital wallets whenever we bank, shop, and pay.

Other services should look to digital streaming and social media to see how we can better integrate digital payments and digital connectedness into our lives. China and India will continue to lead this march, but that doesn’t mean the West can’t catch up quickly.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

AI in Agriculture 

Artificial intelligence, drones, and robots are already being deployed on large farms to assist with several farm management tasks for crops and livestock. However, there are some risks that must be accounted for when turning over our food production to AI-driven machines. 

We will discuss the benefits that AI can bring to the world of agriculture, including some applications that are already in place to help our farmers produce more and better-quality food. We will then discuss some potential pitfalls we must be aware of if we turn over our food supply to machines. 

AI’s Potential

AI has brought to the world countless tools for personal and industrial use. With agriculture, it has delivered the potential to increase yields, keep pests away, and reduce costs in nearly all parts of farm management. 

Our farmers need to know how best to use these tools, and we need to understand how their application can be a benefit. There are already AI applications that are worthwhile and are providing users with successful results. Let us see how the grass is greener on the AI side.

The Smart Farm

AI is leading to smart farms with farming models that have high cognitive ability.  This technology is focused on a few specific areas.

Data and Analysis

With new equipment, farms can be set up to track and analyze multiple data points. For example, a farmer can use a drone to review a large tract of land and identify the exact location of a pest infestation or plant disease in real time. When analyzed with AI models, this mass of data boosts information accuracy and helps farmers make informed decisions.

Robotics and Automation

Robots are used for farm activities such as picking, thinning, and sorting to speed up manual labor work and deal with any labor shortages. The goal is to increase productivity, consistency, and quality while minimizing errors.

Predictions

AI models have been designed to predict changes to weather patterns, soil erosion, and pest infestations to improve farm management and planning. These tools allow farmers to see into the future, assisting them with informed decision-making.  

Agriculture faces constraints on its use of AI similar to those in other industries, such as compatibility with current technology, resource availability, security, and potential regulatory issues. Even with these constraints, the farms of the future will be highly dependent on AI, making them more precise and creating a new “cognitive farm.”

Digital Farmers

AI is revolutionizing one of our oldest industries and giving farmers multiple ways to produce more abundant harvests in all parts of the world. With this transformation, farms will require digital farmers, men and women who can push forward these technological changes and manage future farms in new ways.

Tools and People

New farm managers must understand and use the correct tools to their farm’s benefit. While extensive technical knowledge is not needed, understanding the basic principles behind the technology and, more importantly, the technology’s operational implications is necessary. Through AI, farm managers can better understand the inner workings of their farms.

The changing technology means that farm talent must be updated. Beyond the typical farming roles, farms will require employees with technological skills. The entire organization will need defined education to stay on top of the AI farming future.  

New Ways of Farming

Farmers will need to leave their comfort zones and explore new collaborative opportunities. This change will involve collaboration with new companies to obtain cutting-edge technologies that will allow a farm to acquire a competitive advantage and boost productivity. These partnerships provide inimitable technologies, giving farmers the upper hand, but these technologies work best for large farms.  

Cost advantages are most significant with economies of scale, so managers will benefit by finding strength in numbers. AI tools can be expensive, often beyond the reach of a small farm, but collaborating with other farms, cooperatives, suppliers, universities, local communities, and government can drive these costs down.

AI’s Current Applications

AI currently monitors soil, detects pests, identifies diseases, and guides intelligent spraying. Here are a few applications farmers are already using.

Crop Monitoring

Crop health relies on micro- and macronutrients in the soil to produce yields of both quantity and quality. Once the crops are planted, monitoring their growth is needed to optimize production. Understanding the interaction between growth and the environment is vital for adjusting toward healthy crops. Traditionally this was done through human observation and experience, but that method is neither accurate nor fast.

Now drones capture aerial data that is used to train computer models to intelligently monitor crops and soil. Such an AI system can use the collected data to:

  • Track the health of the crops
  • Accurately predict yields
  • Identify crop malnutrition

This can all be done faster than a human could, in real-time, providing farmers with specific problem areas so they can take immediate actions to prevent problems before they grow.  
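As a minimal illustration of how drone imagery can be turned into a crop-health signal, the sketch below computes the widely used normalized difference vegetation index (NDVI) from near-infrared and red reflectance bands. The array values and the stress threshold are hypothetical, and a production system would work on full-resolution imagery rather than a toy patch.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

# Hypothetical 3x3 patch of drone imagery (reflectance values per band).
nir_band = np.array([[0.60, 0.55, 0.20],
                     [0.58, 0.50, 0.18],
                     [0.62, 0.52, 0.15]])
red_band = np.array([[0.10, 0.12, 0.30],
                     [0.11, 0.13, 0.28],
                     [0.09, 0.14, 0.31]])

index = ndvi(nir_band, red_band)
stressed = index < 0.3                        # assumed threshold for struggling vegetation
print(np.round(index, 2))
print("Cells flagged for inspection:", np.argwhere(stressed).tolist())
```

Healthy vegetation reflects strongly in the near-infrared, so low-NDVI cells point the farmer to exactly the areas worth inspecting first.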

Determining Crop Maturity

Monitoring wheat head growth is a labor-intensive process that can be aided by AI. Over a three-year period, researchers collected wheat head images at different growth stages and under different lighting, building a two-step wheat ear detection system. The AI model was able to outperform human observation, freeing farmers from making daily visits to fields to check on their crops.

Similarly, tomato ripeness has been determined with AI. 

A different study examined how well AI can detect maturity in tomatoes. The researchers built a model that looked at the color of five different parts of a tomato and then made maturity estimates. The algorithm correctly classified tomatoes with 99.31% accuracy.
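A rough sketch of that idea, not the study's actual model: extract simple per-region color features and feed them to an off-the-shelf classifier. The color values, labels, and region count below are hypothetical stand-ins for real measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: mean (R, G, B) color of five tomato regions,
# flattened into a 15-value feature vector, labeled 0 = unripe, 1 = ripe.
rng = np.random.default_rng(0)
unripe = rng.normal(loc=[60, 140, 60] * 5, scale=10, size=(40, 15))
ripe   = rng.normal(loc=[180, 50, 40] * 5, scale=10, size=(40, 15))
X = np.vstack([unripe, ripe])
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify a new, hypothetical tomato whose regions are mostly red.
sample = rng.normal(loc=[175, 55, 45] * 5, scale=10, size=(1, 15))
print("Predicted class:", clf.predict(sample)[0])   # 1 = ripe
```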

Generally, evaluating soil involves digging up samples and sending them to the lab for analysis. AI researchers have used image data from a cheap microscope to train their model to do the same task. The model was able to make sand content and soil organic matter estimates with accuracy similar to costly and slower lab analyses. 

Disease and Insect Detection

Using deep learning, farmers are now automating the detection of plant diseases and pests.  This is done through image classification and segmentation. 

Source: V7 labs

One study looked at apple black rot and used a deep neural network to identify four stages of disease severity. As with the tasks above, manual disease identification is labor-intensive; this project identified disease severity with an accuracy of 90.4%.

Similarly, a different study used the YOLO v3 algorithm to identify multiple pests and diseases on tomato plants. Using only a digital camera and a smartphone, researchers identified twelve different classes of disease or pest. Once trained, the model detected problems with an accuracy of 92.39% in only 20.39 milliseconds per image.

Source: Frontiers In

Another study used sticky traps to collect six species of flying insects and capture images. Coarse counting was then based on object detection, refined with fine-counting results. The model identified bees, mosquitoes, moths, flies, chafers, and fruit flies with 90.18% identification accuracy and 92.5% counting accuracy.

Livestock Monitoring

Animals are a major component of our food system and need even more tracking than plants.  Companies are now offering tools to track cattle and chickens. CattleEye tracks and annotates key points for individual cows. 

Source: CattleEye

The system uses overhead cameras to monitor animal health and behavior, allowing a rancher to spot a problem and be notified without being next to the cow.  

By collecting data with cameras and drones, this kind of software is being used to count animals, detect disease, monitor birthing, and identify unusual behavior. It also confirms access to food and water. 

Smart Spraying

AI also prevents problems in the first place. Drones help with the spraying of fertilizer and pesticides uniformly across a field. They operate with high precision in real-time, spraying correctly and reducing contamination risk to animals, humans, and water resources.  

This is a growing field and is best performed by multiple drones, but intelligent spraying is getting better. Virginia Tech researchers developed a smart spray system that can detect weeds. 

A camera mounted on a sprayer records the geolocation of the weeds, analyzing their size, shape, and color, and then delivers a precise amount of herbicide. 

Source: Researchgate

The device’s accuracy prevents collateral damage to other crops in the environment.  
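To show the flavor of such a system, here is a deliberately simplified sketch that flags green regions in a sprayer-camera frame by color thresholding. The file name, HSV range, and area threshold are assumptions; the Virginia Tech system analyzes far richer size, shape, and color features.

```python
import cv2

# Frame captured by the sprayer-mounted camera (path is a placeholder).
image = cv2.imread("field_frame.jpg")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Assumed HSV range covering green vegetation.
mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))

# Keep connected green regions large enough to be weed candidates.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    if cv2.contourArea(contour) > 500:        # assumed minimum area in pixels
        x, y, w, h = cv2.boundingRect(contour)
        print(f"Candidate weed at pixel ({x}, {y}), size {w}x{h}: trigger nozzle here")
```

In a real sprayer, each detection would be mapped from pixel coordinates to a geolocation before the nozzle is triggered.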

Risks of AI in Agriculture

All these different AI applications will help us monitor and improve our food systems, helping feed the 2.4 billion people suffering from food insecurity. AI can reduce labor inefficiency and increase reliability. However, there are some cautionary tales. 

According to a release by Asaf Tzachor of Cambridge University, there could be flaws in the agricultural data that emphasize productivity over environmental concerns. This focus could lead to errors that cause over-fertilization, excess pesticide use, improper irrigation, and soil erosion. These factors must also be considered when designing AI systems; inadvertent changes that cause crop failures could lead to massive food insecurity.

Cybersecurity is a second issue. Cyberattacks could disrupt entire food systems, especially for farms that rely heavily on AI.

Finally, those without access to the new technology could be cut out of markets. Big farmers will profit, and small farms will be locked out of the gains entirely if they cannot afford the AI infrastructure. 

Planning Ahead

As in all enterprises, diligence and conscientious planning contribute to farming success. Farmers must plan their AI strategy carefully; optimizing operations and yield requires thoughtful assessment. This planning involves a thorough review of priorities and a clear implementation plan.

AI provides tools that can boost a farm’s yields and transform the industry. Increases in agricultural production on a large scale will impact a country’s GDP, increase food security, and positively impact the environment. The US had just over two million farms in 2021, averaging 445 acres each and totaling roughly 895 million acres across the country.

Analytics and robotics boost production on almost any farm. AI-related productivity gains can reshape the farming business and improve our global food supply. This is one way we can counteract the climate factors that could reduce corn, rice, soy, and wheat production by 20-49%.

Closing Thoughts

Since the advent of agriculture, technology has improved its efficiency. From plows and irrigation to tractors and AI, we have moved forward to feed our growing population. With the ongoing changes to our climate, AI has arrived just in time to save us all from potential food insecurity. We must use AI to increase efficiency and reduce food production costs while also improving environmental sustainability. Doing so can make our farmers “smarter” and give us more and healthier foods.  

If small farmers can work together and take full advantage of these new AI tools, they can compete with large industrial farms. We also have to ensure that the systems put into place are safe and take an all-encompassing view that focuses not only on yields but also on potential environmental effects. Sustainability remains crucial, and AI is the missing piece.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

Spotting Deepfakes

A deepfake is image, audio, or video content that uses artificial intelligence to create a digital representation by replacing the likeness of one person with that of another. This advanced technology is becoming more common and convincing, leading to misleading news and counterfeit videos.

We will delve deeper into deepfakes, discuss how deepfakes are created, why there are concerns about their growing prevalence, and how best to detect them so as not to be fooled into believing their content.  

Rise of the Machines

Advances in computing have made machines ever better at simulating reality. What once took days in a darkroom can now be done in seconds with Photoshop. Photographic trickery is nothing new: five pictures of the Cottingley Fairies fooled the world in 1917.

Modern cinema now relies on computer-generated characters, scenery, and sets, replacing the far-flung locations and time-consuming prop-making that were once an industry staple.  

Source: The Things

The quality has become so good that many cannot distinguish between CGI and reality.

Deepfakes are the latest iteration in computer imagery, created using artificial intelligence techniques that were once very advanced but are beginning to enter the consumer space and will soon be accessible to all.

What Are Deepfakes?

The term deepfake comes from the underlying technology, deep learning, a specific field of artificial intelligence (AI) and machine learning. Deep learning algorithms teach themselves how to solve problems from data, improving as the training dataset grows. Applied to deepfakes, they can swap faces in video and other digital media, producing realistic-looking but 100% fake content.

While many methods can be used to create deepfakes, the most common relies on deep neural networks (DNNs) built around autoencoders that perform a face-swapping technique. The process starts with a target video used as the basis of the deepfake, plus a collection of video clips of the person (say, Tom Cruise) you wish to overlay onto each frame of the target video.

The target video and the clips used to produce the deepfake can be completely unrelated. The target could be a sports scene or a Hollywood feature, and the person’s videos to insert could be a collection of random YouTube clips.

The deep learning autoencoder studies the clips to understand how the person looks from several angles, accounting for different facial expressions and environmental conditions. It then maps that person onto each target video frame so the result looks original.

An additional machine learning technique called Generative Adversarial Networks or GANs is added to the mix, which detects any flaws and improves the deepfake through multiple iterations. GANs are themselves another method used to create deepfakes. They rely on large amounts of data to learn how to create new examples that mimic the real target. With sufficient data, they can produce incredibly accurate fakes.  
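A highly simplified sketch of the shared-encoder, two-decoder idea behind autoencoder face swapping appears below. The layer sizes are arbitrary, and a real pipeline adds convolutional layers, face alignment, and GAN-based refinement; this is an illustration of the architecture, not a working deepfake tool.

```python
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    """One shared encoder plus one decoder per identity (toy dimensions)."""
    def __init__(self, dim=64 * 64 * 3, latent=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, latent))
        self.decoder_a = nn.Sequential(nn.Linear(latent, 1024), nn.ReLU(),
                                       nn.Linear(1024, dim), nn.Sigmoid())
        self.decoder_b = nn.Sequential(nn.Linear(latent, 1024), nn.ReLU(),
                                       nn.Linear(1024, dim), nn.Sigmoid())

    def forward(self, x, identity):
        z = self.encoder(x)   # shared representation of pose, expression, lighting
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(z)

model = FaceSwapAutoencoder()
frame_of_person_a = torch.rand(1, 64 * 64 * 3)   # stand-in for a flattened face crop
# Training reconstructs each person with their own decoder; at inference time,
# encoding person A and decoding with person B's decoder produces the face swap.
swapped = model(frame_of_person_a, identity="b")
print(swapped.shape)   # torch.Size([1, 12288])
```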

Deepfake Apps

Deepfake apps have also hit the consumer market, such as Zao, FaceApp, DeepFace Lab, Face Swap, and the notorious and removed DeepNude–a particularly dangerous app that generated fake nude images of women.

Several other versions of deepfake software that have varying levels of results can be found on the software development open-source community GitHub. Some of these apps can be used purely for entertainment purposes. However, others are much more likely to be maliciously exploited.

How Are Deepfakes Being Used?

While the ability to swap faces quickly and automatically with an app and create a credible video has some interesting benign applications, such as Instagram posts and movie production, deepfakes are obviously dangerous. Sadly, one of the first real-world deepfake applications was the creation of synthetic pornography.

Revenge Porn

In 2017, a Reddit user named “deepfakes” created a forum for porn featuring face-swapped actors. Since then, the genre of “revenge porn” has repeatedly made the news. These deepfake use cases have severely damaged the reputations of celebrities, prominent figures, and even regular people. According to a 2019 Deeptrace report, pornography constituted 96% of deepfake videos found online, and this had only dropped to 95% by 2022.

Political Manipulation

Deepfakes have already been employed in political manipulation. In 2018, for example, a Belgian political party released a video of then-President Donald Trump giving a speech that called on Belgium to withdraw from the Paris climate agreement. Trump never gave that speech; it was a deepfake.

The Trump video was far from the first deepfake created to mislead, and many tech-savvy political experts are bracing for a future wave of fake news featuring convincingly realistic deepfakes. We were fortunate not to see many of them during the 2022 midterms, but 2024 may be a different story. They have, however, been used this year in attempts to change the course of the war in Ukraine.

Non-Video Deepfakes

Just as deepfake videos have taken off, their audio counterparts have also become a growing field with many applications. Realistic deepfake audio can be created with similar deep learning algorithms using samples of a few hours of the target voice. 

Once the voice model has been created, it can be made to say anything, as demonstrated by the audio deepfake of Joe Rogan. This method has already been used to perpetrate fraud and will likely be used again for other nefarious actions.

There are beneficial uses for this technology. It could be used as a form of voice replacement in medical applications, as well as in specific entertainment situations. If an actor were to die before the completion of a movie or before a sequel is started, their voice could be fabricated to complete lines that were never recorded. Game programmers can also create characters who say anything in real time in the real voice, rather than relying on a limited script recorded by the voice actor.

Detecting Deepfakes

With deepfakes becoming ever more common, our society must collectively adapt to the spotting of deepfake videos in the same way that we have become attuned to detecting various kinds of fake news online. 

As with all types of cybersecurity, there is a cat-and-mouse game in which new deepfake technology emerges before the relevant countermeasure is created. Like the fight against computer viruses, this cycle is an ongoing challenge in limiting the harm that can be done.

Deepfake Indicators

There are a few tell-tale giveaways that help in spotting a deepfake.

First, the earlier generation of deepfakes was not very good at animating faces, and the resulting videos felt unnatural and obvious. However, after the University at Albany released its research on blinking abnormalities, newer deepfakes incorporated natural blinking into their software, eliminating this giveaway.

Second, look for unnatural lighting. The deep fake’s algorithm will often retain the illumination of the provided clips that were used to create the fake video’s model. This results in a lighting mismatch. 

Third, unless the audio has also been created with a deepfake audio component, it might not match the speech patterns of the target. The video and the audio may appear out of sync unless both have been painstakingly manipulated.

Fighting Deepfakes Using Technology

Even though the quality of deepfakes continues to improve and appear more realistic with technical innovation, we are not defenseless to them. 

Sensity, a company that helps verify IDs for KYC applications, has a deepfake detection platform that resembles an antivirus alert system.  

The user is alerted when viewing content that shows signs of AI-generated media. Sensity’s system uses the same deep learning techniques to detect deepfake videos as are used to create them.

Operation Minerva takes a more straightforward approach to identifying and combating deepfakes, employing digital fingerprinting and content identification to locate videos made without the target’s consent. When it identifies a deepfake, including revenge porn, it sends a takedown notice to the sites Operation Minerva polices.

There was also a Deepfake Detection Challenge by Kaggle, sponsored by AWS, Facebook, Microsoft, and the Partnership on AI’s Media Integrity Steering Committee. This challenge was an open, collaborative initiative to build new ways of detecting deepfakes. The prizes ranged up to a half million dollars.  

Closing Thoughts

The advent of deepfakes has made the unreal seem real. The quality of deepfakes is improving and combating them will be more problematic as the technology evolves. 

We must remain diligent in finding these synthetic clips that can seem so real. They have their place if used for beneficial reasons, such as in entertainment and gaming, or med-tech to help people regain speech. However, the damage they can do on personal, financial, and even social levels has the potential to be catastrophic. Responsible innovation is vital to lasting success.

Disclaimer: The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

Data and Machines of the Future

As we move toward our future, we increasingly notice two concepts that have always been at odds: data and computing power. This rift runs as follows: we have more data than we can process, while much of that data remains subpar for processing. 

Historically, we have always had more data than we could process, and the data we had was not always the best data to process. We are now reaching the point where these two issues are starting to blur.

First, we are creating computers that have the ability to process the vast amounts of data that we are now creating. Second, we are creating synthetic data that may not be “real.” However, if it’s “authentic,” the users may prefer it. Let’s discuss these two topics and how they will interact in the future.  

Rise of the Machines

A new class of computers is emerging that stretches the boundaries of problems that they have the capability to solve. These new devices from three defined areas are pushing aside the limits of Moore’s Law and creating a new computing capability curve. 

Companies and industries have always been defined by their limitations, the currently unsolvable problems. However, these new machines may help companies solve and move beyond the presently unsolvable.

These ongoing challenges define the boundaries of companies and their core products, services, and overall strategies at any time. For decades, the financial services industry has operated under the assumption that predicting the movement of the stock market and accurately modeling market risk is either hard or impossible to do, but it may not be in the near future.  

Emerging technologies, when combined, can potentially make these core challenges achievable. With quantum computing as the next level of problem-solving, combined with high-performance computers (HPCs) and massively parallel processing supercomputers, the ability to use never-before-seen swaths of data becomes possible.

As business leaders, we must create partnerships and inroads to understanding the latest technological developments in the computing field and in our industry at large. This creative process includes experimentation and the design of a skills pipeline that will lead to future success.  

New Data Types

With the increases in chatbots, augmented reality (AR), and synthetic data (including deep fake audio, images, and video), we are forced to evaluate what is “real” and what is not. When we see news of the latest global issue, we want to know that it is real, but do we care if the newest advertisement for our favorite snack is? 

We may even prefer the unreal. Say we are discussing a sensitive health issue with a synthetic (i.e. AR) nurse or we are training an AI using synthesized data that is designed to remove historical discrimination–unreal may be the preference. 

As technology progresses, we will shift from a desire for the real to a desire for the authentic, and authenticity is defined by four foundational measures:

1.     Provenance. What is the source of the data?

2.     Policy. How has the data been restricted?

3.     People. Who is responsible for the data?

4.     Purpose. What is the data trying to accomplish?

Synthetic data aims to correct data bias, protect data privacy, and make AI algorithms more fair and secure. Synthetic content helps design more seamless experiences and provides novel interactions with AI that save time, energy, and cost. However, the use of synthetic data will be complex and controversial.

Data and Computing

High performance is a growing necessity. IDC reported that 64.2 zettabytes (ZB) of data was created or replicated in 2020, and this is expected to nearly triple to 180 ZB by 2025. Only 10.6% of the 2020 data was useful for analysis, and of that, only 44% was used.
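To put those IDC figures in perspective, the quick calculation below combines the percentages quoted above to estimate how much of the 2020 data was actually analyzed.

```python
created_zb     = 64.2    # data created or replicated in 2020 (IDC)
useful_share   = 0.106   # share useful for analysis
analyzed_share = 0.44    # share of the useful data that was actually used

analyzed_zb = created_zb * useful_share * analyzed_share
print(f"{analyzed_zb:.1f} ZB analyzed of {created_zb} ZB created "
      f"({analyzed_zb / created_zb:.1%} of the total)")
# Roughly 3.0 ZB, or about 4.7% of everything created that year.
```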

The answer to dealing with this massive issue is through high-performance computing (HPC), and while the field of HPC is not new, the computing potential has expanded. The smartphones of today contain the processing power of supercomputers three decades ago. 

Now GPUs, ASICs, and other purpose-built processors, such as Tesla’s D1 Dojo chips built specifically for computer vision neural networks and intended as a foundation of autonomous driving technology, are pushing HPC capability to new levels.

The Unreal World

Another issue for the future is the unreal. Dealing with a call center bot that does not understand your request is maddening, yet AI is already becoming indispensable in business. It constantly improves, and what was once a “differentiator” for businesses has now become a necessity.

Synthetic data is being used for AI model training in cases where real-world data cannot apply. This “realish” yet unreal data can be shared, protecting confidentiality and privacy while maintaining statistical properties. Further, synthetic data can counter human-born biases to increase diversity.

However, synthetic data comes with significant challenges. The world of deep fakes and disinformation is causing predictable damage, and the use of AI algorithms in social media creates echo chambers, filter bubbles, and algorithmic confounding that can reinforce false narratives. 

New Computing Technologies

Quantum Computing

While HPCs and supercomputers are able to process more data, they’re simply new versions of the same old stuff. 

The next generation of computer evolution will likely be when quantum computers begin to solve problems that we consider obdurate. Quantum research is still in its infancy but is likely to follow an exponential curve. 

The estimated number of qubits needed to crack the current level of cybersecurity is several thousand, and the devices that are being designed by IBM and Google have reached an announced 127 while others have claimed to reach 256 qubits. However, this is up from 53 for IBM and 56 for Google in 2019. 

A doubling every two years sounds like Moore’s Law, but Moore’s Law does not apply to quantum computing in the same way. Because of entanglement, adding one more qubit to a quantum system doubles the amount of information the system can represent. The move from 53 to 127 qubits therefore means the computing state space has doubled 74 times in just three years.
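The claim is easy to verify with the standard rule that an n-qubit system has a state space of 2^n amplitudes; the short calculation below is just that arithmetic.

```python
# An n-qubit system has 2**n amplitudes in its state space, so going from
# 53 to 127 qubits multiplies that space by 2**(127 - 53) = 2**74.
old_qubits, new_qubits = 53, 127
doublings = new_qubits - old_qubits
print(doublings)                 # 74 doublings
print(f"{2 ** doublings:.2e}")   # ~1.89e+22-fold larger state space
```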

Mimicking and Using Nature

The other technology that is reshaping computing is taking lessons from nature. Biology-inspired computing takes its ideas from a 3.7-billion-year-old system. There are two subclasses of biocomputing:

1.     Biomimicry, or computing systems that draw their inspiration from biological processes.

2.     Biocomputing, or systems that use biological processes to conduct computational functions.  

Biomimicry systems have been used in chip architectures and data science algorithms.  However, we are now beginning to see machines that are not only mimicking biological operations but are leveraging biological operations and processes. 

Data storage is a biocomputing darling for a good reason. Based on natural DNA found on Earth, one estimate predicts that an exabyte of data (1 million Terabytes) could be stored in one cubic centimeter of space and has the potential to persist for over 700,000 years.  

Moving Forward

How do businesses incorporate new forms of data and new ways of computing into practice? 

The first action is to begin evaluating how these different technologies will shape the industry and your operations. What are the problems that are considered a cost of doing business, and what would change if these problems could be solved? How can synthetic data be used to improve your current business functions, and what things need to be looked out for that could have a negative impact? What kind of machines could affect your business first?

Those who desire to take an active role in shaping the future should consider what hardware can be used to solve the currently unsolvable.  

No matter the industry, the critical step of forging partnerships is essential. Most businesses can gain skills and capabilities from such partnerships, and many industry problems require collaboration at a comprehensive scale.

Alliances and partnerships formed today will be the outliers of the industry tomorrow.

Closing Thoughts

We have always been defined by our unanswerable questions, and the advent of computers has helped us to solve grand challenges. We are also facing a synthetic, unreal world intended to improve our lives; yet, depending on the user’s intent, such data and its progeny can become tools of malice.

Both of these concepts have reached the point where business leaders can no longer treat them as abstract. They’re rapidly improving, and their impacts on industries will be profound in the coming decade. The unsolvable will be a thing of the past, and what we believe is real will come into question.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

The Web3 Race

The race to create the future internet, Web3, is heating up daily. Providers are competing against each other, demonstrating a healthy, expanding, and decentralized Web3 ecosystem to come.  

The Basics

Users interact through various open-source applications such as MetaMask, Web3 gaming, the metaverse, and DeFi protocols. However, they don’t usually stop and think about what happens behind the Web3 scenes, or what is piecing the blockchain-based web together. If we imagine Web3 as a burgeoning new metropolis, it’s the providers of the underlying infrastructure and power grid that make all these operations possible.  

Every Dapp relies on communication with one or more blockchains. Every day, full nodes serve billions of requests from Dapps to read and write data on a blockchain, and a massive node infrastructure is required to keep up with the ever-expanding Dapp ecosystem.
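As a small illustration of the read requests a Dapp sends to such a node, the sketch below uses the web3.py library (version 6 naming) to query an Ethereum node over JSON-RPC. The provider URL is a placeholder for your own node or a node provider's endpoint.

```python
from web3 import Web3

# Placeholder endpoint: in practice this points at your own node or a node provider.
node_url = "https://example-node-provider.invalid/your-api-key"
w3 = Web3(Web3.HTTPProvider(node_url))

if w3.is_connected():
    # Typical "read" requests a Dapp issues many times a day.
    print("Latest block:", w3.eth.block_number)
    balance_wei = w3.eth.get_balance("0x0000000000000000000000000000000000000000")
    print("Balance:", w3.from_wei(balance_wei, "ether"), "ETH")
else:
    print("Could not reach the node endpoint")
```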

However, running nodes is both time- and capital-intensive, so Dapp builders must turn to external providers for remote access to nodes. This creates strong monetary incentives for infrastructure providers to serve as much of the Web3 ecosystem as possible. But who is winning this race?

The Problem of Centralization

The most expeditious way to provide the reliable infrastructure that can power Dapp ecosystems is for centralized companies to set up a web of blockchain nodes, commonly hosted in data centers such as those of Amazon Web Services (AWS), and to let developers access them from anywhere for a subscription fee.

This is precisely the approach a few players in the Web3 space took, but it resulted in centralization, which runs against the ideals of a self-described decentralized space.

Centralization is a significant issue for the Web3 economy because it means that, first, the ecosystem becomes susceptible to 51% attacks and, second, it sits at the mercy of a few powerful players.

Consider that 81% of Ethereum Beacon Chain nodes are located in the United States and Europe, and that if the three largest mining pools were to collude, they could conduct a 51% attack on the Ethereum network. Today’s blockchains are less distributed and more centralized than we think, or would like them to be. This structure starkly contrasts with the vision expounded in Satoshi Nakamoto’s Bitcoin white paper.

Courtesy of bitpanda

If the large node providers were to collude, the advantages Web3 has over Web2 would be lost. What’s more, users’ reliance on centralized providers increases the chances of system outages. The Ethereum outage that occurred in 2020 due to Infura, one of the larger node providers, shows the problems of a centralized system: several crypto services and exchanges, including MetaMask, Coinbase, and Binance, had to suspend withdrawals of Ether and ERC-20 tokens because they could not entirely rely on the Infura nodes.

It must be noted that Amazon is often the backbone of these centralized providers. It has suffered from several past outages, which now creates a second, severe layer of vulnerability. 

The Infura incident was not the only such outage. The Ethereum network’s move to Ethereum 2.0, or “The Merge,” was interrupted by a seven-hour outage resulting from a single node’s hardware failure on the network. A genuinely decentralized network would not have these types of worries.

Solana’s Problem

Decentralization remains a crucial tenet of Web3 and its economy, and a centralized blockchain infrastructure is a threat capable of undermining it. The Solana blockchain has suffered multiple outages, all due to a lack of decentralized nodes; the network has been unable to handle spikes in traffic.

Solana’s problem is common for many blockchains that are trying to scale their operations and throughput. Many of the top decentralized blockchain protocols continue to struggle to find a path to scale while remaining decentralized. The largest blockchains, Bitcoin and Ethereum, are steadfast in the fight for decentralization, but Ethereum is still vulnerable.

In the early days of blockchain, on June 8th, 2013, Feathercoin (FTC) was the victim of a 51% attack. One entity was able to control over half of the FTC network’s total processing power, allowing it to reverse confirmed transactions on the chain and even prevent new transactions from going through. FTC has since fallen into blockchain obscurity, its price plummeting alongside a delisting from all major exchanges.

The ongoing centralization stems from overreliance on Web2-era cloud providers such as AWS and Infura, which have carried their roles over from Web2 to provide the infrastructure for Web3 and its economy. However, the movement to avoid centralization and blockchain’s problematic “single point of failure” is gaining significant steam. This change is good news for Web3 ecosystems that wish to remain healthy, secure, and decentralized.

Better Solutions With Decentralized Infrastructure

Courtesy of Statista

Novel innovations have given rise to a new breed of decentralized provider. These node providers run their services on-premises, or even in users’ homes, instead of relying on centralized cloud providers.

The Architecture of Web2 vs. Web3

Courtesy of Coinbase Blog

The key advantage of decentralized nodes is that they cannot be taken down the way a single point of failure can. They can also provide faster connections for global users. Additionally, providers of decentralized node infrastructure create new economies, in which independent providers serve requests for data and earn rewards in native tokens.

Increasing Competition

Several providers in the decentralized Web3 space, such as Flux, Ankr, and QuickNode, compete for market share. This competition ensures that providers are consistently motivated to improve their services and provide the best possible user experience for their customers. Such a competitive environment is good for the Web3 economy because it leads to innovation while also lowering prices.  

Investors are seeing great returns acting as pooled node providers as well. Yieldnodes sets up decentralized nodes for investors and has paid 10% returns a month for over two years, with a high of over 19% in February 2021.  

Courtesy of Yieldnodes

What’s even more important is that competition for blockchain infrastructure results in a more decentralized Web3 economy. The more decentralized the network, the more resilient it is to censorship, and the closer 51% attacks come to being an issue of the past.

Closing Thoughts

The idea behind Web3 is not just to create a better internet but a better world. Decentralized infrastructure providers are building an internet foundation that is more equitable, censor-resistant, and secure. 

By shaking things up, they are supplanting the status quo of giant, centralized hosting providers that are carryovers from Web2 and which make blockchains more susceptible to attacks, outages, and censorship. The new decentralized providers are on the cutting edge and have an incentive to push innovation, providing their users with both the best possible service and the greatest level of integrity. 

Disclaimer:  The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees. This information should not be interpreted as an endorsement of cryptocurrency or any specific provider, service, or offering. It is not a recommendation to trade. 

NFTs and Deep Learning

Non-fungible tokens (NFTs) are becoming more popular by the day. According to DappRadar, the trade volume of NFTs in 2021 was $24.9 billion, up from roughly $95 million in 2020.

The rise of non-fungible tokens is one of the most significant developments in the cryptocurrency ecosystem. The initial generation of NFTs concentrated on developing the fundamental components of the NFT market’s infrastructure, including ownership representation, transfer, and automation.

Even the most basic kinds of NFTs capture great value, but industry hype makes it difficult to separate signal from noise. As the market develops, the value of NFTs should shift from static images or text to more intelligent and dynamic collectables. The upcoming wave of NFTs will be heavily shaped by artificial intelligence (AI).

NFTs and AI

To understand how intelligent NFTs can be created, we need to know which AI disciplines intersect with the current generation of NFTs. NFTs are represented virtually using digital media, including photos, videos, text, and audio. These representations map remarkably well onto several AI sub-disciplines.

The “deep learning” branch of AI uses deep neural networks to generalize from datasets. The concepts underpinning deep learning have been known since the 1970s, but in the last ten years they have experienced a new boom thanks to platforms and frameworks that have accelerated their widespread use. Deep learning can significantly impact a few critical areas that would enable the intelligence of NFTs.

Computer Vision

NFTs today are mostly pictures and videos, making them ideal platforms for the latest developments in computer vision. Convolutional neural networks (CNNs), generative adversarial networks (GANs), and transformers are the approaches that have advanced computer vision in recent years. 

The next wave of NFT technologies can use image generation, object identification, and scene understanding, among other computer vision techniques. Generative art is the most obvious field in which to integrate computer vision with NFTs.
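
As a concrete illustration of the object-identification piece, the minimal sketch below runs a pretrained convolutional network over an image file. It is not a production NFT pipeline: the file name is a placeholder, and the model choice (torchvision’s ResNet-50) is simply one readily available option.

```python
import torch
from PIL import Image
from torchvision import models

# Placeholder path: any RGB image (e.g., a candidate NFT artwork) would do.
image = Image.open("artwork.png").convert("RGB")

weights = models.ResNet50_Weights.DEFAULT        # pretrained ImageNet weights
model = models.resnet50(weights=weights).eval()
batch = weights.transforms()(image).unsqueeze(0)  # preprocessing matched to the weights

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Print the three most likely object categories with their confidence scores.
top = probs.topk(3)
labels = weights.meta["categories"]
for score, idx in zip(top.values, top.indices):
    print(f"{labels[idx.item()]}: {score.item():.1%}")
```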

James Allison, a Nobel Prize-winning cancer researcher, was the subject of an NFT that the University of California, Berkeley auctioned off on June 8 for more than US$50,000. Designers scanned faxes, handwritten notes, and legal documents related to Allison’s important findings filed with the university. Everyone may view this piece of art, titled The Fourth Pillar, online, and the team created an NFT to prove ownership.

Natural Language Processing

Language is the primary means through which cognition, including forms of ownership, may be expressed. Over the past ten years, some of the most significant advances in deep learning have been in natural language understanding (NLU). 

In NLU, transformer-powered models such as GPT-3 have achieved new milestones. New versions of NFTs could benefit from research in fields like sentiment analysis, question answering, and summarization. Adding language comprehension to NFTs in their current form feels like a simple way to improve their usability and engagement.

For instance, Art AI recently released Eponym, a program that translates text into art and mints the results directly as NFTs.
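
To make the sentiment-analysis idea mentioned above concrete, here is a minimal sketch using the Hugging Face transformers library. The listing descriptions are invented, and the default pipeline model is simply the library’s out-of-the-box choice.

```python
from transformers import pipeline

# Default sentiment model; in practice one would pick a model suited to the domain.
classifier = pipeline("sentiment-analysis")

# Hypothetical text attached to two NFT listings.
descriptions = [
    "A luminous generative landscape that evolves with every viewing.",
    "Rushed knock-off of a better-known collection.",
]

# Score each description and print the label, confidence, and text.
for text, result in zip(descriptions, classifier(descriptions)):
    print(f"{result['label']:8s} ({result['score']:.2f})  {text}")
```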

Voice Recognition

Speech intelligence is the third branch of deep learning that can immediately affect NFTs. The field of voice intelligence has recently evolved because of techniques like CNNs and Recurrent Neural Networks (RNNs). Attractive NFT designs may be powered by features like voice recognition or tone analysis. It should be no surprise that audio-NFTs appear to be the ideal application for speech intelligence techniques.

NFTs need voice AI because it enables people to connect with their digital collectables naturally. Voice AI, for instance, may be used to query an NFT or issue commands to it, making NFTs more dynamic, engaging, and ultimately more valuable. Platforms such as Enjin allow users to create music-industry NFTs, which could be game-changing. 

The potential of NFTs is increased by language, vision, and voice intelligence improvements. The value unleashed at the point when AI and NFTs converge will influence several aspects of the NFT ecosystem. Three essential categories in the current NFT environment may be immediately reinvented by introducing AI capabilities.

Using AI to Generate NFTs

This aspect of the NFT ecosystem stands to gain the most from recent developments in AI technology. Deep learning techniques in areas like computer vision, language, and voice can raise the NFT creation experience to heights we haven’t seen before. We already see this tendency in fields like generative art, but current projects remain limited in the AI techniques they employ and the use cases they address.

We should soon see the usefulness of AI-generated NFTs spread beyond generative art into other general NFT utility categories.

Digital artists like Refik Anadol, who are experimenting with cutting-edge deep learning techniques to develop NFTs, illustrate this value proposition. Anadol’s studio trained models on hundreds of millions of photos and audio snippets, using techniques like GANs and quantum computing, to produce astounding graphics. 

Natively Embedding AI

Even if we can create NFTs using AI, they won’t necessarily be clever. But what if they were? Another commercial opportunity presented by the convergence of these two technological phenomena is the native integration of AI capabilities into NFTs. Imagine NFTs with language and speech skills that can interact with a specific environment, engage in conversation with people, or respond to queries about their meaning. Platforms like Alethea AI and Fetch.ai are beginning to make headway here.

NFT Infrastructures With AI

Building blocks like NFT marketplaces, oracles, and NFT data platforms that incorporate AI capabilities can lay the groundwork for gradually enabling intelligence across the whole NFT ecosystem. Consider NFT marketplaces that use computer vision to give consumers intelligent recommendations, or NFT data APIs and oracles that derive intelligent signals from on-chain statistics. The market for NFTs will increasingly depend on data and intelligence APIs.

Closing Thoughts

AI is reshaping nearly every industry. By combining with AI, NFTs can go from simple, rudimentary forms of ownership to intelligent, self-evolving versions that allow for richer digital experiences and much greater forms of value for NFT creators and users. 

Smart NFT technology does not require any far-fetched technological innovation. The flexibility of NFT technologies combined with recent developments in computer vision, natural language comprehension, and voice analysis already provide an excellent environment for launching new innovations in the ever-growing digital asset space. 

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

NFT and Crypto in GameFi

The GameFi sector has been one of the main contributors to the explosive growth of the cryptocurrency market over the past few years.

Gamers can earn incentives while playing thanks to GameFi, a combination of the words “finance” and “gaming.”

With consistent growth, the market has a token market cap of almost $9.2 billion. Notably, despite the crypto winters, GameFi networks have endured and thrived. 

By 2031, the sector is expected to be worth $74.2 billion.

What Is GameFi?

GameFi combines blockchain technology, non-fungible tokens (NFTs), and game mechanics to build virtual worlds where users can interact and earn tokens.

Video games used to be stored on centralized servers, giving publishers and creators complete control over everything in their games. This meant that none of the digital objects that players gathered over many hours, or even years, of gaming actually belonged to them. 

Few of these objects, from avatars and virtual territories to weapons and clothing (sometimes referred to as “skins”), had any use outside the game. There was therefore no practical way for players to be compensated for their online time, or to access the value of their acquired in-game goods, without pursuing a professional gaming career.

Players earn in-game rewards by completing objectives and moving through different stages in GameFi games. These prizes, as opposed to conventional in-game money and equipment, have monetary worth outside of the gaming industry.

The industry has been dubbed “play-to-earn” because in-game items are issued as NFT “accomplishment tokens” that may be exchanged on NFT marketplaces or cryptocurrency exchanges.

Even though “play-to-earn” is the preferred terminology, participating in GameFi involves risk, including the possibility of incurring significant upfront fees that a player might lose.

How GameFi Works

There is an in-game currency, market, and token economy in almost all blockchain-based games. There is no centralized authority, in contrast to conventional games. Instead, the community generally manages and governs GameFi projects, and users participate in decision-making.

Although each GameFi project has its own unique mechanics and economics, they have similar characteristics:

  • Blockchain Technology: The distributed ledger of a blockchain powers GameFi initiatives. This maintains player ownership records and guarantees the openness of all transactions.
  • Play-to-Earn (P2E) Model: In contrast to traditional gaming, where players play to win, GameFi initiatives employ a P2E business model. By providing incentives with measurable value outside of the game, these games encourage players to play more. These incentives typically take the form of NFTs or in-game money.
  • Asset Ownership: In conventional gaming, in-game purchases are sunk costs trapped inside a particular game. With P2E, players own their tokenized in-game assets and, in most cases, can trade them for cryptocurrencies and, ultimately, money. On the blockchain, assets are tokenized and might include anything from a suit of armor to a piece of virtual real estate (a minimal ownership-check sketch follows this list).
  • DeFi Elements: Decentralized finance (DeFi) features, such as yield farming, liquidity mining, and staking, may also be part of many GameFi initiatives. These give participants more ways to grow their token holdings.
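
As a rough sketch of what on-chain asset ownership looks like in practice, the snippet below queries an ERC-721 contract’s standard ownerOf function with web3.py. The RPC endpoint, contract address, and token ID are all placeholders rather than references to any specific game.

```python
from web3 import Web3

RPC_URL = "https://rpc.example.org"  # placeholder node endpoint
NFT_CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder ERC-721 address
TOKEN_ID = 1  # placeholder token ID

# Minimal ABI: only the standard ERC-721 ownerOf view function is needed here.
ERC721_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "owner", "type": "address"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
nft = w3.eth.contract(address=NFT_CONTRACT, abi=ERC721_ABI)

# Whoever this call returns is the current, provable owner of the in-game asset.
owner = nft.functions.ownerOf(TOKEN_ID).call()
print(f"Token {TOKEN_ID} is owned by {owner}")
```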

Decentraland, The Sandbox, Axie Infinity, and Gala are examples of well-known blockchain gaming networks that use the P2E GameFi architecture.

Axie Infinity

Consider the Ethereum-based game Axie Infinity, which gained popularity in 2021 and became the most Googled NFT worldwide in March 2022. Players in Axie Infinity collect, breed, train, and battle creatures called “Axies.” Unlike items in conventional games, each Axie may be exchanged on the game’s marketplace for real money (to give you an idea, the most expensive Axie ever sold went for US$820,000).

Axie Infinity Shards (AXS), which can be purchased and sold on exchanges like Crypto.com, and Smooth Love Potion (SLP), which users earn by playing the game, are the two native cryptocurrencies of the game. AXS is also utilized as a governance token, enabling token holders to decide how the gaming experience will evolve in the future.

Having said that, there may be a significant barrier to entry for games like Axie Infinity. Players must purchase three pet characters before they can start playing. A typical team setup used to cost roughly $300, although prices have recently decreased by approximately one-third. 

Despite the price decline, this initial cost remains a significant barrier for many, especially given that the vast majority of blockchain gaming players now come from underdeveloped nations. Due to this barrier, gaming guilds have emerged, which allow NFT owners to lend out in-game assets (NFTs) in exchange for a percentage of the assets created. 

GameFi Is Boosting Growth

Because GameFi initiatives use cryptocurrencies to settle transactions, the use of digital currencies has grown significantly in recent years.

The number of Unique Active Wallets (UAW) connected to the blockchain gaming industry increased significantly in the third quarter of 2021, according to a report recently released by DappRadar, a platform that monitors activities on decentralized applications (DApps). 

These wallets made up roughly 49% of the 1.54 million daily UAWs registered during that time. The information supports the sector-disruptive potential of GameFi and the rising use of cryptocurrencies, which in turn encourages their uptake and use.

Another recent study, published by the NFT game aggregation platform Chainplay, showed that 75% of GameFi investors entered the cryptocurrency markets as a result of their engagement with GameFi, demonstrating GameFi’s expanding influence on crypto adoption.

In addition to an expanding crypto universe and growing retail crypto exchanges, GameFi has significantly contributed to the NFT market’s expansion. Because GameFi relies heavily on NFTs for in-game assets, NFTs are being used on-chain more often. The growth of the GameFi market in 2021 closely mirrored the NFT boom.

Sales of GameFi’s NFTs increased from $82 million in 2020 to $5.17 billion in 2021. 

Closing Thoughts

Because GameFi is a part of the cryptocurrency sector, it is also impacted by its many ups and downs. As a result, activity in the GameFi sector increases during uptrends while it declines during downtrends.

GameFi platform developers must work hard to create captivating games and help crypto ecosystems withstand market declines if they hope to keep users onboard. 

Although the endeavor is easier said than done, GameFi investors are now focusing on enhancing gaming experiences with the clear objective of sustainability.

There are many obstacles for developers to overcome, but if they can draw gamers in with excellent gameplay, the future of blockchain-based gaming is more than promising.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

The Future of NFTs in Web3 and Web4

The market for non-fungible tokens, or NFTs, was valued at $232 million in 2020, increasing to $22 billion in 2021. Because of its growing popularity in collectible trading and the developing significance of Decentralized Finance (DeFi), this market is anticipated to expand by around three times by 2031 as Web3 comes into being.

This article delves into why NFTs are an essential part of the future and how they fit into the evolution of the internet, known as Web3 and, eventually, Web4. 

What Are NFTs?

NFTs are digital tokens that, like cryptocurrencies, operate on a blockchain. They differ from crypto tokens in that each one is unique, which means they can confer “uniqueness” on other assets to which they are connected. Digital art has proven to be the most common initial use for NFTs, with pieces by artists like Grimes and Beeple proving popular with online collectors and frequently fetching high prices. 

NFTs are most frequently stored on the Ethereum blockchain. However, they may also be kept on other blockchains, including Polygon and Binance.

What Is Web3?

The phrase Web3 represents the notion of an innovative, improved internet. In essence, Web3 leverages blockchains, cryptocurrencies, and NFTs to return ownership and authority to the consumers. As put by a tweet from 2020: Web1 was read-only, Web2 is read-write, and Web3 will be read-write-own.

Some of Web3’s core virtues are:

  • Decentralized. Ownership of the internet is distributed among its builders and users rather than held by a single controlling entity. 
  • Inclusive. Everyone has equal access to participate.
  • Merit-driven. Web3 uses economic mechanisms and incentives rather than relying on trusted third parties.

Let’s give a contextual example of how Web3 works. 

Web3 offers you control of your digital assets. Take the scenario of playing a Web2 game: an in-game item you buy is tied directly to your account. If the game’s developers terminate your account, or if you quit the game, you lose it. 

Direct ownership is possible with Web3 thanks to NFTs. Nobody, not even the game designers, has the authority to revoke your ownership. Additionally, if you decide to stop playing, you may sell or trade your in-game possessions on open marketplaces to recoup their value.

What Is Web4?

The semantic web, where computers rather than people generate new information, is commonly seen as the end result of Web3. Web4 will be the Internet of Things (IoT): a web of intelligent connections.

The core features of Web4 include: 

  • A hazy, blurred boundary between man and machine
  • Information transmitted from every part of the world
  • Artificial intelligence capable of human-like communication
  • Complete transparency and traceability
  • Incredible speed and resilience

Everything around us is changing because of Web4, including the economy, logistics, and even medicine. The customer would have complete control over the internet and unbridled access to their activities and data. Web4 expands the potential of any internet-related sphere of activity by enabling a connection between man and machine.

NFTs in Web3 and Web4

Web4 will be synonymous with the digital economy and all digital assets. The impending metaverse will bridge the digital divide, making today’s “tokenomics” look small by comparison. 

In this context, NFTs will prove key to online communities, events, exchangeable assets, digital identities, and more. They will also greatly add to the rising popularity of retail cryptocurrency trading, which has already produced its fair share of mavericks and winners. In essence, NFT technology protects the integrity of the growing digital asset space. 

NFTs have also been used to grant exclusive access to offline events, in addition to online groups and events. The permanent proof of ownership that NFTs provide on the blockchain makes the technology well suited to address significant problems in event ticketing, such as forgery and digital theft.

NFTs have been integrated into blockchain games like DeFi Kingdoms, Axie Infinity, and Crabada, resulting in the development of thriving in-game economies where NFTs are valued according to their characteristics and statistics. In these games, playing more is highly rewarded since leveling up NFT assets increases profits and boosts the likelihood that uncommon and expensive item drops will occur. 

Concerned that someone could take your metaverse username? Through the Ethereum Name Service (ENS), NFTs have already made it possible for users to possess unique “.eth” Ethereum wallet addresses. The network is attracting a huge number of new addresses each day.

These unique addresses, which are NFTs, are linked to other decentralized services and make complicated wallet addresses more individualized and much simpler to remember.
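
For illustration, resolving an ENS name takes only a few lines with web3.py’s built-in ENS support. The endpoint URL and the name below are placeholders, not real accounts.

```python
from web3 import Web3

# Placeholder mainnet endpoint; ENS records live on Ethereum mainnet.
w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))

name = "alice.eth"                 # hypothetical ENS name
address = w3.ens.address(name)     # forward lookup: human-readable name -> wallet address
print(f"{name} -> {address}")

# Reverse lookup also works when the owner has configured a reverse record.
if address:
    print(f"{address} -> {w3.ens.name(address)}")
```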

Closing Thoughts

Usernames and wallet addresses are no longer the primary means of identifying assets in the metaverse–non-fungible tokens have taken their place. The Sandbox’s metaverse project already uses NFTs to represent virtual locations, furnishings, and other objects.

The Sandbox generated more than $24 million in revenue in March 2022 from selling NFTs representing real estate in the metaverse. Leading companies and well-known individuals from various industries, including Atari, Snoop Dogg, and the South China Morning Post, all own land in the metaverse.

NFTs are building the groundwork for digital communities, tradeable in-game items, and the greater metaverse economy while also revolutionizing the ownership and exchange of digital assets.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

Ethereum Is Not Decentralized

With “the Merge” to Ethereum 2.0, the world’s second-largest blockchain network by market cap has been the talk of the town. We have heard and read some commentators postulating that Ethereum (ETH) can succeed even if it does not scale correctly.

But Ethereum is more than its potential scalability. It stores billions, and eventually perhaps trillions, in value without needing a central monetary authority or bank. The necessary keys, however, are increased liquidity and decreased volatility. 

Bitcoin (BTC) has followed a similar path to Ethereum, with many saying they wanted the BTC network to scale as a priority. They believe Bitcoin’s success should be measured by whether it can rival Visa, or else it will fail, arguing that it must be a medium of exchange rather than only a “store of value” like gold.  

Such a requirement would put a decentralized network in jeopardy, prone to censorship and government capture. Fortunately, even with the high cost of mining hardware, small-block miners have been successful. At the same time, network scaling is happening through “layer-2” solutions (outside the base blockchain layer) and sidechains such as Liquid and the Lightning Network. Bitcoin’s base layer can therefore keep its role as a store of value (SoV) while building its own exchange network.   

Ethereum is designed to be the world’s computer, with unstoppable code that can run Dapps cheaply and trustlessly. But Ethereum has had to implement a major fix since the DAO hack, forgoing some decentralization, and scalability was also jeopardized when the Dapp CryptoKitties broke the chain’s usability. We hope that Ethereum’s move to proof of stake will solve the throughput issues and lower the gas (transaction) fees for base-layer transactions, which can be excessive.  

Source: YCharts

Since Ethereum’s Merge, the issuance of new ETH has slowed. According to data from Ultra Sound Money, the Ethereum issuance rate has fallen by 98%, though supply has not yet become deflationary. At the end of September 2022 it sits at 0.09% annualized growth, with a total of 14,042,583 ETH currently in Ethereum staking contracts, worth $18.7 billion.  
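
A quick back-of-the-envelope check of these figures (the total-supply number is an approximation added here for context, not taken from the source data):

```python
# Figures quoted above, end of September 2022.
staked_eth = 14_042_583          # ETH sitting in staking contracts
staked_usd = 18.7e9              # reported dollar value of that stake

implied_price = staked_usd / staked_eth
print(f"Implied ETH price: ${implied_price:,.0f}")        # roughly $1,330 per ETH

approx_total_supply = 120_000_000  # rough ETH supply at the time (assumption)
print(f"Share of supply staked: {staked_eth / approx_total_supply:.1%}")  # roughly 12%
```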

Ethereum Is Not Decentralized

Glassnode data shows that 85% of Ethereum’s total supply is held by entities with 100 ETH or more, and 30% of the supply sits in wallets holding over 100,000 ETH.

Source: Glassnode

Ethereum’s centralization issue is even more apparent with the shift to proof of stake. Being a “staker” does not require the same hardware as proof of work, but a validator needs to have 32 ETH staked to participate, a sum that most cannot afford. Ethereum’s “Beacon Chain” validators illustrate how the PoS system will look. 

Most Beacon Chain validators are large entities, large exchanges, and newly founded staking providers with significant ETH holdings. A large portion of validators are legal entities registered in either the US or the EU, subject to those jurisdictions’ regulations.  

These centralized holdings mean that just under 69% of the total amount of ETH staked on the Beacon Chain is held by a mere 11 providers, with 60% staked by only four providers. A single provider, Lido, makes up 31% of the staked supply.  

Source: TheEylon

When there is a bull market like we saw until the first quarter of 2022, this amount of centralization generally goes unnoticed, but as the tide turns, uncertainty reveals such flaws.  

The top 11 stakers together control just under 68% of staked ETH, above the two-thirds supermajority needed to finalize blocks; if they were to collude, a proof-of-stake attack could succeed. 

Source: Glassnode
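
To put those concentration figures in context, the sketch below compares them against the thresholds that matter in Ethereum’s proof-of-stake design: more than one-third of the stake can stall finality, and more than two-thirds can finalize whatever it chooses. The shares are the ones quoted above.

```python
# Staking shares quoted in the text (Beacon Chain, late 2022).
shares = {
    "Lido alone": 0.31,
    "Top 4 providers": 0.60,
    "Top 11 providers": 0.68,
}

ONE_THIRD, TWO_THIRDS = 1 / 3, 2 / 3

for label, share in shares.items():
    can_stall = share > ONE_THIRD      # enough stake to prevent finality
    can_control = share > TWO_THIRDS   # enough stake to finalize at will
    print(f"{label}: {share:.0%} staked | "
          f"stall finality: {can_stall} | control finality: {can_control}")
```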

No Really, DeFi Is Centralized Too

Decentralized finance was never really decentralized in the first place, and Ethereum is not securing as much decentralized wealth as we think. Much of Ethereum’s value sits in yield farming with high annualized yields, and in other Ethereum-based DeFi that worked until 2022’s crash. Those high yields kept major players from looking behind the curtain and finding the flaws. 

There was massive growth in Ethereum’s “Total Value Locked” (TVL), but the crypto is not actually locked; it is temporarily deposited to capture those outsized yields, which is why TVL dropped from $110 billion to $31 billion in only a year. 

Source: Defillama

Lack of Users

Likely fewer than 500,000 people have interacted with DeFi on Ethereum. The user numbers from YCharts, DappRadar, DefiPulse, Etherscan, and Nansen are all underwhelming.

Source: YCharts
Source: DappRadar

The most valuable Ethereum-based DeFi protocols have only small numbers of active users, partly because fees are high enough to drive new users away from Ethereum. The highest-valued DeFi protocols count their users in the thousands (OpenSea has only 26K daily users; see above). The reason $31 billion is “locked” is the incentive of high-“APY” (yield) liquidity mining. 

When users are being paid to borrow money from a DeFi protocol, you cannot consider any one user the same as a long-term holder. They can disappear quickly. 

The Solution

The solution for a centralized system is quite simple–make it more decentralized. Unfortunately, those currently working on Ethereum are rebuilding everything that is wrong with Wall Street and putting it on a blockchain. 

The deep-pocketed backers and in-the-know developers are pushing for centralization because they want the largest share of the pie. As functionally limited as Bitcoin is, it remains the most decentralized cryptocurrency, with ownership spread widely across fractional holdings. Ethereum, however, has more potential and can still be successful.

Staking

Staking was intended to remove the hardware requirements that kept small players from participating, yet the requirement of a 32 ETH stake for participation in PoS is itself limiting. In October 2022, 32 ETH is worth about $44,000. 

If there are enough decentralized staking pools, then much of this issue can be resolved. By allowing many to invest through aggregating pools, the small players can finally join in. 

Layer-2 and Beyond

If the number of nodes (validators) is high enough to lower gas fees and increase the throughput of ETH, then Ethereum may have a successful decentralized future. If other Layer-2 avenues can make transactions even more affordable, enabling micropayments as µRaiden aims to do, then Ethereum can be truly decentralized. This may draw many more retail, everyday users to cryptocurrency through the many American and European crypto exchanges.

By combining affordable Layer-2 solutions with a broad and decentralized PoS system, Ethereum will shed much of its centralization.

Volatility and the Catch-22

The use of ETH is the key, but volatility is the restriction. While the types of price swings we have seen in the crypto markets remain (10% or more in a day), all crypto use will be limited. The problem is that while the volatility is high, the acceptance will not be widespread, and while acceptance is not widespread, volatility will be high.  

Getting transaction costs down and making micropayments possible will allow for more widespread use, and more widespread use means more stability. If the large holders can release the grip that is centralizing Ethereum, they will likely do better in the long run while serving the greater good of decentralization. 

We are incredibly positive about the Ethereum network and look forward to its decentralization. 

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.
