Robotics and AI 

Artificial intelligence (AI) and robotics are different disciplines, and robots can perform without AI. However, robotics reaches the next level when AI enters the mix.

We will explain how these disciplines differ and explore spaces where AI is utilized to create envelope-pushing robotic technology. 

Robotics in Brief

Robotics is a branch of engineering and computer science in which machines are created to perform tasks without human intervention once programmed.

This definition is broad, covering everything from a robot that aids in silicon chip manufacturing to the humanoid robots of science fiction, some of which, like Honda's ASIMO, are already being designed and built. In global finance, robo-advisors have been working with us for several years already.

Courtesy of Honda

Robots have traditionally been used for tasks that humans cannot do efficiently (such as moving an assembly line's heavy parts), tasks that are repetitive, or both. For example, a robot can accomplish the same task thousands of times a day, whereas a human would be slower, get bored, make more mistakes, or be physically unable to complete it.

Robotics and AI

Sometimes these terms are incorrectly used interchangeably, but AI and robotics are very different. In AI, systems mimic the human mind: they learn from training data to solve problems and make decisions autonomously, without needing explicit, rule-based programming ("if A, then B").
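To make the distinction concrete, here is a minimal Python sketch, with invented sensor values, contrasting a hard-coded "if A, then B" controller with a controller whose decision rule is learned from labeled examples (using scikit-learn). It is illustrative only, not a description of any particular robot.

# Illustrative contrast: explicit rule-based programming vs. a learned decision rule.
from sklearn.tree import DecisionTreeClassifier

# Classic robot programming: the behavior is written out explicitly.
def programmed_controller(sensor_reading: float) -> str:
    if sensor_reading > 0.5:      # "if A..."
        return "stop"             # "...then B"
    return "continue"

# AI approach: the decision boundary is learned from (hypothetical) labeled examples.
readings = [[0.1], [0.2], [0.4], [0.6], [0.8], [0.9]]
actions = ["continue", "continue", "continue", "stop", "stop", "stop"]
learned_controller = DecisionTreeClassifier().fit(readings, actions)

print(programmed_controller(0.7))           # rule-based output: "stop"
print(learned_controller.predict([[0.7]]))  # learned output:    ["stop"]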

As we have stated, robots are machines programmed to conduct particular tasks. Most robotics tasks do not require AI, because they are repetitive, predictable, and do not involve decision-making.

Robotics and AI can, however, coexist. Robotic projects that use AI are in the minority, but such systems are becoming more common and will enhance robotics as AI systems grow in sophistication.

AI-Driven Robots

Amazon is testing Astro, the newest example of a household robot; it is essentially a self-driving Echo Show. The robot uses AI to navigate a space autonomously, acting as an observer (using microphones and a periscopic camera) when the owner is not present.

This type of robot is not novel; robotic vacuums have been navigating around the furniture in our homes for well over a decade. But even these devices are becoming "smarter" with improved AI.

The company behind the robot vacuum Roomba, iRobot, announced a new model that uses AI to spot and avoid pet poop.  

Robotics and AI in Manufacturing

Robotic AI manufacturing, also known as Industry 4.0, is growing in scope and will become transformational. This fourth industrial revolution may be as simple as a robot navigating its way around a warehouse, or as sophisticated as the systems of Vicarious, which designs turnkey robotic solutions for tasks too complex for programming alone.

Vicarious is not alone in this service. For example, the Site Monitoring Robot from Scaled Robotics can patrol a construction site, scanning and analyzing the data for potential quality issues. In addition, the Shadow Dexterous Hand is agile enough to pick soft fruit from trees without crushing it while learning from human examples, potentially making it a game changer in the pharmaceutical industry. 

Robotics and AI in Business

For any business needing to send things within a four-mile radius, Starship Technologies has delivery robots equipped with sensors, mapping systems, and AI. Their wheeled robots can determine the best routes on the fly while avoiding hazards in the world they navigate.

In the food service space, robots are becoming even more impressive. Flippy, the robotic chef from Miso Robotics, uses 3D and thermal vision to learn from the kitchen it's in and acquire new skills over time, skills well beyond the burger-flipping that earned it its name.

Flippy, the robot chef from Miso Robotics

Robotics and AI in Healthcare

Front-line medical professionals are tired and overworked. Unfortunately, in healthcare, fatigue can lead to fatal consequences.

Robots don't tire, which makes them a perfect substitute for some duties. Waldo Surgeon robots, for example, perform operations with steady "hands" and incredible accuracy.

Robots can be helpful in medicine far beyond a trained surgeon's duties. Having robots perform more basic, lower-skilled work frees up medical professionals' time to focus on care.

The Moxi robot from Diligent Robotics can handle many tasks, from running patient samples to distributing PPE, giving doctors and nurses more of this valuable time. Cobionix has developed a needle-free vaccine-administering robot that does not require human supervision.

Robotics and AI in Agriculture

The use of robotics in agriculture will reduce the effect of persistent labor shortages and worker fatigue in the sector. But there is an additional advantage that robots can bring to agriculture: sustainability.

Iron Ox uses robotics with AI to ensure that every plant gets the optimal level of water, sunshine, and nutrients so it grows to its fullest potential. Because each plant is analyzed using AI, less water and fertilizer are required, producing less waste.

The AI learns from its recorded data, improving the farm's yields with every new harvest.

The Agrobot E Series has 24 robotic arms that it can use to harvest strawberries, and it uses its AI to determine the ripeness of the fruit while doing so.

Courtesy of Agrobot

Robotics and AI in Aerospace

NASA has been working to improve its Mars rovers' AI while also developing a robot to repair satellites.

Other companies are also working on autonomous rovers. Ispace's rover uses onboard tools and may be the device tasked with laying the foundation of the future "Moon Valley" colony.

Additional companies and agencies are trying to enhance space exploration with AI-controlled robots. For example, the CIMON from Airbus is like Siri in space. It’s designed to aid astronauts in their day-to-day duties, reducing stress with speech recognition and operating as a system for problem detection.   

When to Avoid AI?

The fundamental argument against using AI in robots is that, for most tasks, AI is unnecessary. The tasks that are currently being done by robots are repetitive and predictable; adding AI to them would complicate the process, likely making it less efficient and more costly.

There is a caveat to this. To date, most robotic systems have been designed with AI limits in mind when they were implemented. They were created to do a single programmed task because they could not do anything more complex. 

However, with the advances in AI, the lines between AI and robotics are blurring. Outside of business- and healthcare-driven uses, we've already seen how AI has made the relatively new, lucrative field of algorithmic trading increasingly accessible to retail investors.

Closing Thoughts

AI and robotics are different but related fields. AI systems mimic the human mind, while robots help complete tasks more efficiently. Robots can include an AI element, but they can exist independently too.  

Robots designed to perform simple and repetitive tasks would not benefit from AI. However, many AI-free robotic systems were created, accounting for the limitations of AI at their time of implementation. As the technology improves, these legacy systems may benefit from an AI upgrade, and new systems will be more likely to build an AI component into their design. This change will result in the marrying of the two disciplines.  

We have seen how AI and robotics can aid several different sectors, keeping us safer, wealthier, and healthier while making some jobs easier or handing them entirely to robots that perform them more efficiently. However, we must also consider a possible change in employment structure: some people's work will be outsourced to robots, and those workers must be accounted for with training and other employment options.

With the combination of AI and robotics, significant changes are on our horizon. This combination represents the very forefront of innovation. 

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

The Uses of Chatbots Like ChatGPT

When we used to hear the word "chatbot," pain often came to mind. Frustration was the norm, with chatbots mostly able to receive a question and reply, "I am sorry, I can't answer that. However, I will contact someone who can help you with it. You should receive a reply within 24 hours."

Yet chatbots have come a long way, and the next-generation bots, like ChatGPT and those under development by Google, are excellent. They will become a vital part of the customer experience, taking repetitive tasks, simple tasks, and routine questions off agents' plates while improving satisfaction scores by quickly providing the information clients need.

Chatbots in Brief

Chatbots have evolved since their inception, when programmers wanted to pass the Turing Test and create artificial intelligence. For example, in 1966, the ELIZA program fooled users into thinking they were talking to a human.

A chatbot is a computer program often using scripts that can interact with humans in a real-time conversation. The chatbot can respond with canned answers, handle different levels of requests (called second and third-tier issues), and can direct users to live agents for specific tasks.  

Chatbots are used for a wide variety of tasks in several industries, mainly in customer service applications such as routing calls and gathering information. But other business areas are starting to use them to qualify leads and focus large sales pipelines.

The first chatbots, built over 50 years ago, were intended to show the possibilities of AI. In 1988, Rollo Carpenter's Jabberwacky was designed more for entertainment but could learn new responses instead of relying only on canned dialog. As they progressed, chatbots moved beyond "pattern matching" and began learning in real time with evolutionary algorithms. Facebook's Messenger chatbots of 2016 added new capabilities and corporate use cases.

A typical chatbot system takes inputs and looks for yes/no answers or keywords to produce a response. But chatbots are evolving toward more comprehensive processing, including natural language processing, neural networks, and other machine learning techniques. The result is increased functionality, an enhanced user experience, and more human-like conversation that improves customer engagement and satisfaction.
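As a rough illustration of that keyword-matching format, the Python sketch below maps a few invented keyword sets to canned replies and falls back to a live agent when nothing matches. Production bots layer natural language processing and machine learning on top of this idea.

# Minimal keyword-matching chatbot; the intents and replies here are invented examples.
CANNED_RESPONSES = {
    ("balance", "account"): "You can view your current balance under 'Accounts'.",
    ("hours", "open"): "Our support desk is available 24/7 via chat.",
    ("refund", "return"): "I can start a return for you. What is your order number?",
}

def keyword_chatbot(message: str) -> str:
    text = message.lower()
    for keywords, reply in CANNED_RESPONSES.items():
        if any(word in text for word in keywords):
            return reply
    # No keyword matched: escalate rather than dead-end the customer.
    return "Let me connect you with a live agent who can help."

print(keyword_chatbot("What are your opening hours?"))
print(keyword_chatbot("My package arrived broken."))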

Benefits of Chatbots

Improved customer service. Clients want rapid and easy resolutions. HubSpot found that 90% of customers want an immediate response to customer service issues.

This is seen with the increase in live chat, email, phone, and social media interactions. Chatbots can provide service to users 24/7, handling onboarding, support, and other services. Even robo-advisors can use chatbots as a first line of contact. 

More advanced systems can pull from FAQs and other sources containing unstructured data, such as old conversations and documents. ChatGPT draws on a massive supply of information up to its 2021 training cutoff.

Improved sales. Chatbots can qualify leads and guide buyers to information and products that fit their needs, producing a personalized experience that drives conversions. For example, they can suggest promotions and discount codes to boost purchase likelihood. They can also act as a checkout-page aid to reduce cart abandonment.

Money savings. The goal of chatbot deployment for service and sales support is often to reduce costs. Chatbots can handle simple and repetitive tasks, allowing human agents to focus on complex issues.

For example, if a small HR team is bogged down with holiday and benefits questions, a chatbot can answer 90% of them, lessening the HR team's load. An Oracle survey found that chatbots could produce savings of more than half of a business's upfront costs. While the upfront costs of chatbot implementation are high, the long-term savings in staffing, equipment, wages, and training will outweigh the initial spending.
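A back-of-the-envelope version of that HR example looks like the sketch below. Every number is hypothetical and is only there to show the shape of the calculation; substitute your own volumes and costs.

# Hypothetical deflection-savings estimate; all inputs are assumptions, not survey data.
monthly_questions = 400        # routine holiday/benefits queries per month (assumed)
deflection_rate = 0.90         # share the chatbot answers, per the example above
minutes_per_question = 6       # HR staff time per query (assumed)
hourly_cost = 40               # fully loaded HR cost per hour, in dollars (assumed)

hours_saved = monthly_questions * deflection_rate * minutes_per_question / 60
print(f"HR hours freed per month: {hours_saved:.0f}")
print(f"Approximate monthly saving: ${hours_saved * hourly_cost:,.0f}")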

Chatbot Implementation Mistakes

Chatbots cannot do everything yet, and it will be a long time before they can handle many tasks, but they already have a useful skill set. They can help humans, freeing people to work on the tasks that truly require a human. Several common implementation mistakes, however, undermine that value.

No human option. This is a mistake many companies make. Chatbots cannot solve all problems, and the client should have a way to escalate their interaction to a human who can solve it.  

Lacking customer research. A bot needs to know what to look for and what to address. If an implementation starts with the most common and time-consuming questions and decides if a chatbot can solve these, it will prove its value many times over. 

Neglecting tool integration. A well-built chatbot will be part of the contact center platform, aiding agents and supervisors. It should be able to pull information from multiple sources and escalate to a live agent with useful contextual information, allowing the agent to take over quickly from where the chatbot left off.

Use Cases of Chatbots

How can businesses use chatbots? Here are a few examples of great implementations improving customer service and outcomes.  

Retail Banking

Banks or online brokers will generally field simple questions from depositors and borrowers. However, many may come at times of vulnerability. The rising cost of living means a closer focus on finances. Clients may have pending transactions, payments, fraud, or other issues; technology could allow them to monitor these in real time. 

If a call center is the only way to address these issues, it will come under added pressure. But these queries can be handled across multiple channels. A banking chatbot with sentiment analysis can cover the text-based digital channels (web chatbot, social media, SMS messaging).

Launched on the website, mobile app, and social media, this virtual assistant can handle first- and second-tier queries (credit card payments, checking account balances). Adding sentiment analysis lets it detect upset customers and quickly route them to a human.
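The routing logic can be sketched in a few lines of Python. The keyword list below is a crude stand-in for a trained sentiment model, and the escalation threshold is an assumption, but it shows how an upset customer is handed to a human while routine queries stay automated.

# Toy sentiment-based routing; a production system would use a trained sentiment model.
NEGATIVE_WORDS = {"angry", "fraud", "unacceptable", "furious", "scam", "worst"}

def sentiment_score(message: str) -> int:
    # Crude proxy: count negative keywords; lower means angrier.
    return -sum(word in message.lower() for word in NEGATIVE_WORDS)

def route(message: str) -> str:
    if sentiment_score(message) <= -2:
        return "escalate_to_human"   # upset customer: hand off immediately
    return "chatbot_flow"            # routine query: stay automated

print(route("I think there is fraud on my card and this is unacceptable"))
print(route("What is my checking account balance?"))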

Chatbots can also help clients create balance alerts, change other settings, and set up payment reminders, ensuring both that the present issue is solved and that the likelihood of a future issue is reduced.

Property Management

As a commercial or residential real estate business grows, more calls come in covering a wide range of issues (rent, maintenance, renovations, and prospective customers), consuming the contact center's resources. A chatbot could answer routine renters' questions, guide them to self-service solutions, or submit a service ticket.

Chatbots can also collect information that lets them direct a query to the relevant category or to the right agent. This reduces high call volumes and turns the chatbot into a source of tickets 24/7, not just when the office is open, with clients notified when their submissions are updated. Chatbots can also send rent reminders via text and provide online payment options to improve on-time payments, a win-win for the user and the company's bottom line.

Logistics

Logistics customers want to know where their items are, in real time. Accurate tracking information is more widely available than ever, but logistics involves many variables at the global level. In addition, high volumes of location requests can overwhelm a company; even simple requests stretch its resources.

A chatbot can deflect many calls from the call center to an automated phone response or a web text-chat service, giving callers a way to track their packages and lowering the strain on service staff, allowing them to focus on complicated issues.

Direct-to-Consumer Retail

Online retailers keep a lot of plates spinning: supply chains, warehousing, couriers, drop shippers, other order fulfillment, and running an e-commerce site. When one piece fails, there are unhappy customers. If a manufacturer has assembly issues with a hot new product, the company may face high call volumes and service requests, resulting in many refunds and returns.

An AI-powered chatbot like ChatGPT can be a lifesaver, guiding customers to troubleshooting and instructional media such as video tutorials or the webpage’s knowledge base. It can also take customer feedback and use this information to improve service outcomes, further optimizing flow. 

It can also help in the returns process, streamlining the system and resolving returns without the need for a human team member. In addition, by deflecting most inbound calls to self-service, the call center's volume is decreased, reducing wait times and producing cost savings. The chatbot can also generate viable leads, helping consumers find the right products for their needs while upselling products and services through personalized recommendations.

Closing Thoughts

All of the use cases above are already in production, and they rely on chatbots less sophisticated than ChatGPT. Even so, chatbots can provide higher levels of service that scale instantly with a business, at an attractive ROI.

There are thousands of chatbot implementations possible for today’s businesses, allowing customers to get the real-time service they need with more personalization and specificity than before; this will only continue to improve and expand, allowing more to be provided to consumers.

As chatbots improve their capabilities, their use will likely broaden in scope and volume. Many things humans did in the past, or do now, will be replaced by the faculties of ever-advancing chatbots. These humans will need to be trained to do other work or higher-level service tasks so that we don’t have a glut of out-of-work service personnel. 

On the other hand, this training will result in more satisfying work for employees, which in the long run can improve their lives. Balance is needed to gain further acceptance of chatbots by employees and the populace as a whole.


AI’s Transformation of Oncology

Artificial intelligence (AI) is constantly reshaping our lives. It saves companies and individuals time and money, but it also has applications in medicine that could save our lives.

Understanding AI's evolution and achievements helps us model future development strategies. One of AI's most significant medical impacts is already being seen, and will continue to be seen, in oncology.

AI has opened essential opportunities for cancer patient management and is being applied to aid in the fight against cancer on several fronts. We will look into these and see where AI can best aid doctors and patients in the future. 

Where Did AI Come From?

Alan Turing first conceived the idea of computers mimicking critical thinking and intelligent behavior in 1950, and by 1956 John McCarthy had coined the term artificial intelligence (AI).

AI started as a simple set of “if A then B” computing rules but has advanced dramatically in the years since, comprising complex multi-faceted algorithms modeled after and performing similar functions to the human brain.

AI and Oncology

AI has now taken hold in so many aspects of our lives that we often do not even realize it. Yet, it remains an emerging and evolving model that benefits different scientific fields, including a pathway of aid to those who manage cancer patients.  

AI excels at a specific kind of task: recognizing patterns and interactions after being given sufficient training samples. It uses the training data to develop a representative model, then applies that model to process new cases and aid decision-making in a specific field.

When applied to precision oncology, AI can reshape the existing processes. It can integrate a large amount of data obtained by multi-omics analysis. This integration is possible because of advances in high-performance computing and several novel deep-learning strategies. 

Notably, applications of AI are constantly expanding in cancer screening and detection, diagnosis, and classification. AI is also aiding in the characterization of cancer genomics and the analysis of the tumor microenvironment, as well as the assessment of biomarkers for prognostic and predictive purposes. AI has also been applied to follow-up care strategies and drug discovery.  

Machine Learning and Deep Learning

To better understand the current and future roles of AI, two essential terms falling under the AI umbrella must be clearly defined: machine learning and deep learning.

Machine Learning

Machine learning is a general concept describing the ability of a machine (a computer) to learn and thereby improve its patterns and models of analysis.

Deep Learning

Deep learning, on the other hand, is a machine learning method that uses algorithmic systems, called deep networks, which mimic networks of biological neurons. When trained, these deep networks can achieve high predictive performance.

Both machine and deep learning are central to the AI management of cancer patients.  
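The difference can be illustrated with a small scikit-learn example on synthetic data: a single linear model stands in for classic machine learning, while a stack of learned layers stands in for a (very small) deep network. Neither resembles a clinical model; the point is only the architectural contrast.

# Illustrative contrast between a linear learner and a small multi-layer "deep" network.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Machine learning: a single linear model learns its weights from the training data.
linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Deep learning (in miniature): stacked layers of artificial neurons learn
# intermediate representations before the final decision.
deep = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0).fit(X_train, y_train)

print("linear model accuracy:", linear.score(X_test, y_test))
print("deep network accuracy:", deep.score(X_test, y_test))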

Current Applications of AI in Oncology

To understand the roles and potential of AI in managing cancer patients and show where the future uses of AI can lead, here are some of the current applications of AI in oncology.  

In the chart below, "a" refers to oncology and related fields and "b" to the types of cancers diagnosed.

Courtesy of the British Journal of Cancer. (a) Oncology and related fields: cancer radiology 54.9%, pathology 19.7%, radiation oncology 8.5%, gastroenterology 8.5%, clinical oncology 7.0%, gynecology 1.4%. (b) Tumor types: general cancers 33.8%, breast cancer 31.0%, lung cancer 8.5%, prostate cancer 8.5%, colorectal cancer 7.0%, brain tumors 2.8%, others (6 tumor types) 1.4% each.

The above graph, from the British Journal of Cancer, summarizes all FDA-approved artificial intelligence-based devices for oncology and related specialties. The research found that 71 devices have been approved. 

As we can see, most of these are for cancer radiology, which makes us correctly assume that it is for detecting cancer through various radiological scans. According to the researchers, of the approved devices, the vast majority (>80%) are related to the complicated area of cancer diagnostics.

Courtesy of cancer.gov

The image above shows a deep learning algorithm trained to analyze MRI images and predict the presence of an IDH1 gene mutation in brain tumors.
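For a sense of how such an image classifier is typically structured, here is a minimal Keras sketch of a convolutional network that outputs a mutation probability from a single MRI slice. It is purely illustrative: the published model, its preprocessing, and its training data are not reproduced here.

# Minimal illustrative CNN for binary classification of MRI slices (not the study's model).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_idh1_classifier(input_shape=(128, 128, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),        # single-channel MRI slice (assumed size)
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # probability that the mutation is present
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_idh1_classifier()
model.summary()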

Concerning the different tumor types that AI-enhanced devices can investigate, most devices apply to a broad spectrum of solid malignancies classed as cancer in general (33.8%). The specific tumor accounting for the largest number of AI devices is breast cancer (31.0%), followed by lung and prostate cancer (both 8.5%), colorectal cancer (7.0%), brain tumors (2.8%), and six other types (1.4% each).

Moving Forward with AI

From its origin, AI has shown its capabilities in nearly all scientific branches and continues to possess impressive future growth potential in oncology.  

The devices that have already been approved are not conceived as a substitution for classical oncological analysis and diagnosis but as an integrative tool for exceptional cases and improving the management of cancer patients. 

A cancer diagnosis has classically represented a starting point from which appropriate therapeutic and disease management approaches are designed. AI-based diagnosis is a step forward and will continue to be an essential focus in ongoing and future development. However, it will likely be expanded to other vital areas, such as drug discovery, drug delivery, therapy administration, and treatment follow-up strategies.

Current cancer types with a specific AI focus (breast, lung, and prostate cancer) are all high in incidence. This focus means that other tumor types have the opportunity for AI diagnosis and treatment improvements, including rare cancers that still lack standardized approaches. 

However, rare cancers will take longer to accumulate large and reliable data sets. When grouped, rare cancers are one of the essential categories in precision oncology, and this group will become a growing focus for AI.

With the positive results that have already been seen with AI in oncology, AI should be allowed to expand its reach and provide warranted solutions to cancer-related questions that it has the potential to resolve. If given this opportunity, AI could be harnessed to become the next step in a cancer treatment revolution.  

Closing Thoughts

Artificial intelligence (AI) is reshaping many fields, including medicine and the entire landscape of oncology. AI brings to oncology several new opportunities for improving the management of cancer patients. 

It has already proven its abilities in diagnosis, as seen by the number of devices in practice and approved by the FDA. The focus of AI has been on the cancers with the highest incidence, but rare cancers amount to a massive avenue of potential when grouped.  

The next stage will be to create multidisciplinary platforms that use AI to fight all cancers, including rare tumors. We are at the beginning of the oncology AI revolution. 


Brain-Computer Interfaces and the Metaverse

What are the commercial promises of brain-computer interfaces, and how will they further connect us to the promises of the metaverse? These interfaces, initially sensor-based (worn on the scalp or skin) and possibly implanted in the brain in the future, could become the platforms that transform all parts of our diverse societies.

The Brain-Computer Merge

You may not have noticed, but with each passing day, we are slowly merging more and more with the technology around us. Our smartphones are our tools for instant communication and the answers to many of our questions, allowing us to focus on other things rather than that which occupied our minds in the past. 

We have implanted pacemakers and defibrillators that tell the cardiologist all about our hearts and correct our irregularities. We have implanted lenses in our eyes to fix vision issues. The technology around us now, especially with our smartphones, will not represent the most common interface in our future. 

What our smartphones do, and much more, will likely be incorporated into our bodies. Google Glass was not a successful project: many of its users were the wrong targets, and it was burdened with technical glitches and security concerns. It did, however, show that we can bring technology closer, supplying useful information and sending sound directly into the ear via bone conduction.

Source: The Verge

As brain-computer interface (BCI) systems progress, they will be an essential step forward in the brain-computer merge. A BCI’s role is the interpretation of the user’s neural activity. A BCI is just part of an environment that is more wired, has more sensors, and is digitally connected.   

With the current generation of experimental brain-computer interfaces, humans can, using only their minds, play video games, articulate prosthetic limbs, control their own limbs, operate wheelchairs, and more. BCIs also have the potential to give patients suffering from Alzheimer's disease, head injuries, or stroke a way to communicate, by letting them control computers that help them speak.

BCI technology will likely move toward enhancing sensory connection and communication. The most common use of BCI technology today is directional control of a computer cursor: imagine moving your cursor and clicking without needing a mouse.

This is already being done using only electrophysiological signals picked up by a system of sensors. Such BCI control has been used by both humans and animals to act on the external world without relying on conventional neuromuscular pathways such as speech or limb movement.
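A toy Python sketch of that directional control is shown below: simulated motor-imagery band power from two assumed sensor locations is mapped to a horizontal cursor velocity. Real BCIs rely on calibrated, per-user decoders rather than this simple linear rule.

# Toy cursor decoder driven by simulated EEG band-power values; illustrative only.
import numpy as np

def decode_cursor_velocity(mu_power_left: float, mu_power_right: float, gain: float = 5.0) -> float:
    # Imagining right-hand movement suppresses the mu rhythm over the left motor cortex,
    # so a left-minus-right power difference gives a (very rough) direction signal.
    return gain * (mu_power_left - mu_power_right)

rng = np.random.default_rng(0)
cursor_x = 0.0
for _ in range(5):
    left, right = rng.uniform(0.2, 1.0, size=2)  # simulated band-power estimates
    cursor_x += decode_cursor_velocity(left, right)
    print(f"cursor x-position: {cursor_x:+.2f}")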

Brain-Computer Interfaces Alongside the Metaverse

The metaverse is a fusing of the real and digital worlds. It’s either an entirely simulated digital environment, as is the case of virtual reality (VR), or an overlay of a digital experience to the real world with augmented reality (AR). 

Thought of in a different way, the metaverse can be a platform where users can feel the real through an animated or digital world encounter. The metaverse that combines augmented reality with the real world can give us more immersive, next-level platforms. The metaverse is intended to make our lives more natural and “realistic,” including socializing, work, and entertainment.  

Scientists, researchers, corporations, and entrepreneurs are making strides with their new and advanced applications. Many of these applications are intended to augment human abilities, fulfilling desires to be stronger, smarter, and better looking. 

Exoskeleton by SuitX

With the BCI connection, it's believed that part of this initiative will transform technology, medicine, society, and the future. Current devices can already extend human abilities beyond former limits, not unlike the powers of Iron Man: SuitX's exoskeleton can reduce loads on the lower back by 60%.

As these technologies continue to merge with BCIs, it’s believed that the opportunity to augment human capability will be even greater.  

Elon Musk’s Neuralink has been working on a consumer-intended high-bandwidth BCI that focuses on four parts of the brain. 

Source: Neuralink

Neuralink has shared a video of a macaque playing "MindPong" via chips embedded in a few regions of its brain. The primate was trained to play the game simply by thinking about moving its hands. The goal is for future "Neuralinks" to link the brain's motor and sensory cortices back to the body, enabling people with paraplegia to walk again.

Inside a Metaverse

Technical training inside a metaverse consists of providing technicians with advanced features and simulations capable of operating 3D representations of complex systems, instruments, or machinery. 

BCIs with simulation technology will combine to empower the metaverse, allowing remote support and maintenance of devices and equipment. This could be a matter of connecting with experts who would control the repair of the system by thinking about moving their own hands to make repairs. 

This would allow for the “switching on” of virtual reality engineers and technicians when an unforeseen repair occurs. It’s not so far of a step beyond this to think of the same procedure for doctors and surgeons.

Dating and socializing in virtual reality may become a common occurrence with virtual movies and museum tours. Such interactions could be enhanced with the direct brain interface that enriches the mind of our partners, adding to positive experiences from the external environment (“I wish you could see things from my point of view” would be possible).  

Closing Thoughts

Applications of brain-computer interfaces are spread across many fields and are not limited to military or medical purposes. The fullest realization of these technologies will certainly take time and incremental improvements, but they will be well-suited for the metaverse. 

This process will require significant testing and a long period of adoption. However, brain interfaces can be game changers for the people who need them and a source of incredible experiences for many others.

We could eventually see a future that no longer has brain-computer interfaces but goes toward the next step of direct brain-to-brain connections. This new type of connection is a very exciting step that would bring humans closer together, allowing us to understand how we all experience the real and virtual worlds.  


Autonomous Ride-Hailing

Uber and Lyft have changed the short-distance ride-hailing market that once belonged to the usual handful of local taxi companies. As they took over these new markets, they also changed the way we think about travel.

Even asking a friend for a ride to the airport is beginning to disappear, and autonomous vehicles are the next step. Several new companies are testing the technology, and some are already in full operation in limited areas with complete autonomous ride-share services. Here we take a deep dive into the current state of the autonomous ride-hailing market.

The Rise of Autonomous Vehicles

Autonomous technology is the next stage for the travel industry. The growing success of the electric vehicle set the tone, even if battery costs still have a long way to fall. But it is better to see this as one door leading to many.

For example, artificial intelligence will play a crucial role in autonomous ride-hailing through route optimization, accident prevention, and maximized utilization (keeping all vehicles active). Not only does this lower costs for companies entering the space, it dramatically improves urban efficiency.

ARK Investment Research has predicted that the price of autonomous electric vehicle transportation will fall to $0.25 per mile by 2030.

These three factors will drive down the cost of ride-hailing services. Industrialized countries will see a particularly large reduction in cost per mile, because labor makes up over 70% of the cost, followed by the vehicle itself and then fuel and maintenance. ARK Research estimates that the price per mile of an autonomous ride-hail could fall by up to 88%.
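A quick worked version of that cost argument, assuming a $2.00-per-mile human-driven baseline (an illustrative figure, not one from ARK), shows how removing the driver and reaching the projected $0.25 price implies a reduction of roughly 88%.

# Illustrative cost-per-mile arithmetic; the baseline price is an assumption.
baseline_cost_per_mile = 2.00
labor_share = 0.70                   # labor is ~70% of the cost, per the text above

cost_without_driver = baseline_cost_per_mile * (1 - labor_share)  # remove the driver
autonomous_target = 0.25                                          # ARK's projected price

print(f"Cost after removing labor:  ${cost_without_driver:.2f}/mile")
print(f"Projected autonomous price: ${autonomous_target:.2f}/mile")
print(f"Reduction vs. baseline:     {1 - autonomous_target / baseline_cost_per_mile:.0%}")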

The autonomous ride-share total addressable market (TAM) is estimated to reach between $11 and $12 trillion for two key reasons. 

1. High utilization rates. Electric autonomous vehicles can provide rides to clients 24 hours a day, going offline only for charging and maintenance.

2. Low operating costs. The cost of a ride-hail could drop to $0.25 per mile due to several factors. Accidents per mile driven by autonomous vehicles are already lower than for human drivers, and with more autonomous vehicles on the roads, this rate will drop further. Autonomous vehicles also drive more efficiently, reducing fuel costs by up to 44% for passenger vehicles and 18% for trucks.

Autonomous Ride-Share Programs

Cruise

Cruise, a subsidiary of General Motors, became the first company to begin an autonomous ride-hailing service in a major city. In June 2022, Cruise received approval from the California Public Utilities Commission and started offering public, driverless, fared autonomous ride-hailing in San Francisco.

Cruise launched with a fleet of 30 autonomous all-electric Chevy Bolts. These small cars ferry passengers around many parts of the city, and the service is currently available daily from 10 p.m. to 6 a.m. (provided “normal” weather conditions).

Source: Cruise

Cruise vehicles are limited to a maximum of 30 mph and cannot operate if there is heavy rain, fog, smoke, hail, sleet, or snow. Cruise is looking to add more Chevrolet Bolts to its fleet and increase the time it’s allowed to operate. 

Since 2020 Cruise has delivered a total of 2.2 million meals to San Francisco’s needy through a partnership with local food banks. Cruise has also begun the groundwork for autonomous ride-hailing services to launch in Dubai in 2023 and later in Japan.

Baidu

Chinese technology giant Baidu began its Autonomous Driving Unit (ADU) in 2014 to design vehicles that could move passengers without a driver. Baidu launched its "Apollo Go" self-driving robo-taxi business in 2017 and upped the ante with the Baidu Apollo RT6 autonomous driving vehicle in July 2022.

In that same month, they received approval from the Beijing authorities to launch a robo-taxi service within a Beijing suburb. The new Apollo RT6 has a detachable steering wheel because the car no longer needs a driver. 

Source: Baidu

In August 2022, Baidu also obtained permits to operate a fully autonomous taxi service in two Chinese megacities, Wuhan (11 million residents) and Chongqing (30 million residents). Baidu's fully autonomous robo-taxi services will begin on a small scale, with a fleet of only five vehicles in each city, operating in designated areas from 9:30 a.m. to 4:30 p.m.

Source: Baidu

Pony.ai

Pony.ai also received permits from Beijing authorities, in July 2022, to provide its fare-charging, driverless robo-taxi service. With this new permit, it can now charge fares for rides within a 60-square-kilometer (23.1-square-mile) area of Beijing's Yizhuang suburb.

The service area includes public facilities like underground stations, parks, and sporting centers, as well as key residential and business districts. The new permit builds upon two other recent Beijing autonomous vehicle milestones. Pony.ai was allowed to launch a robo-taxi service with safety drivers in November 2021. 

Source: pony.ai

Since November 2021, Pony.ai has provided over 80,000 rides from 200 pickup and drop-off locations. By July 2022, its robo-taxi service, "PonyPilot+", had completed a total of 900,000 orders, with nearly 80% coming from repeat customers. Further, 99% of passengers left positive reviews after their trips, with an average rating of 4.9 stars on a 5-point scale.

Hyundai Motors

Korean automaker Hyundai launched its RoboRide autonomous ride-hailing service in Gangnam, Seoul, after the South Korean Ministry of Land, Infrastructure and Transport issued Hyundai permits to operate its autonomous vehicles in the city.

The Seoul Metropolitan Government established a system that connects traffic signals with autonomous vehicles. The system also supports autonomous vehicles with remote functions, such as lane changing, under circumstances where fully autonomous driving is not feasible.

Hyundai has been testing autonomous driving in Gangnam since 2019. The program so far includes only two self-driving IONIQ 5 vehicles, operating Monday to Friday from 10 a.m. to 4 p.m. with up to three passengers. It is slated to expand to the general public after successful testing.

Source: SAE

Waymo One

Waymo One, the autonomous ride-hailing service from Alphabet (Google), started as the Google self-driving car project and has been running autonomous rides in the Phoenix metro area. It has recently expanded from the East Valley suburbs, where it charges fares, to a new pilot program in central Phoenix.

Both services run 24 hours a day, seven days a week. In their 2021 safety report, Waymo states that they have driven millions of miles on public roads in their ten years of service and, with simulations, have completed billions of driving miles.  

Source: Waymo

Closing Thoughts

As the number of autonomous vehicle ride-hailing projects increases, we will become increasingly used to the idea. The number of miles driven (both actual and virtual) will continue to grow, and as this happens, the insurance industry will begin to push toward autonomous driving. 

In the U.S. and other industrialized countries, the cost of human-driven vehicles is high, so economics alone will push toward autonomy. The benefits of optimized fuel use and reduced traffic will continually argue in favor of autonomous driving. We will soon all be passengers.


AI in Agriculture 

Artificial intelligence, drones, and robots are already being deployed on large farms to assist with several farm management tasks for crops and livestock. However, there are some risks that must be accounted for when turning over our food production to AI-driven machines. 

We will discuss the benefits that AI can bring to the world of agriculture, including some applications that are already in place to help our farmers produce more and better-quality food. We will then discuss some potential pitfalls we must be aware of if we turn over our food supply to machines. 

AI’s Potential

AI has brought to the world countless tools for personal and industrial use. With agriculture, it has delivered the potential to increase yields, keep pests away, and reduce costs in nearly all parts of farm management. 

Our farmers need to know how best to use these tools, and we need to understand how their application can be a benefit. There are already AI applications that are worthwhile and are providing users with successful results. Let us see how the grass is greener on the AI side.

The Smart Farm

AI is leading to smart farms with farming models that have high cognitive ability.  This technology is focused on a few specific areas.

Data and Analysis

With new equipment, farms can be set up to track and analyze multiple data points. For example, a farmer can use a drone to review a large tract of land and identify the exact location of a pest infestation or plant disease in real-time. This mass of data has boosted information accuracy and can help farmers make informed decisions when analyzed with AI models.

Robotics and Automation

Robots are used for farm activities such as picking, thinning, and sorting to speed up manual labor work and deal with any labor shortages. The goal is to increase productivity, consistency, and quality while minimizing errors.

Predictions

AI models have been designed to predict changes to weather patterns, soil erosion, and pest infestations to improve farm management and planning. These tools allow farmers to see into the future, assisting them with informed decision-making.  

Like other industries, agriculture faces similar constraints related to its use of AI, such as compatibility with current technology, resource availability, security, and potential regulatory issues. Even with these constraints, the future farms will be highly dependent on AI, making them more precise and creating a new “cognitive farm.” 

Digital Farmers

AI is revolutionizing one of our oldest industries and giving farmers multiple ways to produce more abundant harvests in all parts of the world. With this transformation, farms will require digital farmers, men and women who can push these technological changes forward and manage future farms in new ways.

Tools and People

New farm managers must understand and use the correct tools to their farm’s benefit. While extensive technical knowledge is not needed, understanding the basic principles behind the technology and, more importantly, the technology’s operational implications are necessary.  Through AI, farm managers can better understand the inner workings of their farms.

The changing technology means that farm talent must be updated. Beyond the typical farming roles, farms will require employees with technological skills. The entire organization will need defined education to stay on top of the AI farming future.  

New Ways of Farming

Farmers will need to leave their comfort zones and explore new collaborative opportunities. This change will involve collaboration with new companies to obtain cutting-edge technologies that will allow a farm to acquire a competitive advantage and boost productivity. These partnerships provide inimitable technologies, giving farmers the upper hand, but these technologies work best for large farms.  

Cost advantages are most significant with economies of scale, so managers will benefit from finding strength in numbers. AI tools can be expensive, beyond the reach of a small farm, but collaborating with other farms, cooperatives, suppliers, universities, local communities, and the government can drive these costs down.

AI’s Current Applications

AI currently monitors soil, detects pests, determines diseases, and applies intelligent spraying. Here are a few of the current applications farmers are already using today. 

Crop Monitoring

Crop health relies on micro- and macronutrients in the soil to produce yields of both quantity and quality. Once crops are planted, their growth must be monitored to optimize production, and understanding the interaction between growth and the environment is vital to keeping crops healthy. Traditionally this was done through human observation and experience, but that method is neither accurate nor fast.

Now drones capture aerial data that is used to train computer models to intelligently monitor crops and soil. Such an AI system can use the collected data to:

  • Track the health of the crops
  • Accurately predict yields
  • Identify crop malnutrition

All of this can be done faster than a human could manage, in real time, pointing farmers to specific problem areas so they can act immediately, before problems grow.
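A minimal sketch of this kind of analysis: compute a standard vegetation index (NDVI) over image tiles and flag low-scoring areas for inspection. The arrays below are synthetic; real pipelines ingest multispectral drone imagery and feed learned models rather than a fixed threshold.

# Toy crop-health screen using NDVI on synthetic reflectance data; illustrative only.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    return (nir - red) / (nir + red + 1e-6)

rng = np.random.default_rng(1)
nir_band = rng.uniform(0.2, 0.9, size=(4, 4))  # simulated near-infrared reflectance tiles
red_band = rng.uniform(0.1, 0.6, size=(4, 4))  # simulated red reflectance tiles

index = ndvi(nir_band, red_band)
stressed_tiles = np.argwhere(index < 0.3)      # low NDVI suggests poor crop health
print("NDVI grid:\n", np.round(index, 2))
print("Tiles flagged for inspection:", stressed_tiles.tolist())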

Determining Crop Maturity

Monitoring wheat head growth is a labor-intensive process that AI can assist. Over a three-year period, researchers collected images of wheat heads at different growth stages and under different lighting, building a two-step wheat ear detection system. The AI model outperformed human observation, sparing farmers daily visits to the fields to check on the crop.

Similarly, tomato ripeness has been determined with AI. 

One study examined how well AI can detect maturity in tomatoes. The researchers built a model that looks at the color of five different parts of a tomato and then estimates maturity. The algorithm correctly classified tomatoes with 99.31% accuracy.
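The color-based idea can be sketched as follows. The five-band sampling, the synthetic "images," and the random-forest classifier are stand-ins for the published method, intended only to show how region color features can drive a ripeness estimate.

# Toy ripeness classifier from region color features; data and model are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def region_color_features(image: np.ndarray, n_regions: int = 5) -> np.ndarray:
    # Mean RGB of n horizontal bands across the fruit image (H x W x 3).
    bands = np.array_split(image, n_regions, axis=0)
    return np.concatenate([band.reshape(-1, 3).mean(axis=0) for band in bands])

rng = np.random.default_rng(2)
unripe = [rng.normal([60, 160, 60], 10, size=(50, 50, 3)) for _ in range(20)]  # greenish
ripe = [rng.normal([190, 40, 40], 10, size=(50, 50, 3)) for _ in range(20)]    # reddish

X = np.array([region_color_features(img) for img in unripe + ripe])
y = np.array([0] * 20 + [1] * 20)  # 0 = unripe, 1 = ripe

clf = RandomForestClassifier(random_state=0).fit(X, y)
new_fruit = rng.normal([185, 45, 45], 10, size=(50, 50, 3))
print("predicted class for a new reddish fruit:", clf.predict([region_color_features(new_fruit)]))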

Generally, evaluating soil involves digging up samples and sending them to the lab for analysis. AI researchers have used image data from a cheap microscope to train their model to do the same task. The model was able to make sand content and soil organic matter estimates with accuracy similar to costly and slower lab analyses. 

Disease and Insect Detection

Using deep learning, farmers are now automating the detection of plant diseases and pests.  This is done through image classification and segmentation. 

Source: V7 labs

One study looked at apple black rot and used a deep neural network to identify four stages of disease severity. As with the other tasks above, identifying disease severity by hand is labor-intensive; the model reached an accuracy of 90.4%.

Similarly, another study used the YOLO v3 algorithm to identify multiple pests and diseases on tomato plants. Using only a digital camera and a smartphone, researchers identified twelve different classes of disease or pest. Once trained, the model detected problems with 92.39% accuracy in just 20.39 milliseconds per image.

Source: Frontiers In

Another study used sticky traps to collect images of six species of flying insect, then combined coarse counting based on object detection with fine-counting results. The model identified bees, mosquitoes, moths, flies, chafers, and fruit flies with 90.18% identification accuracy and 92.5% counting accuracy.
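Below is a hedged sketch of the detection step behind studies like these, using a generic pretrained detector from torchvision (trained on everyday COCO objects, not crop pests). In practice the model would be fine-tuned on labeled pest and disease images, and the file name here is hypothetical.

# Generic object-detection inference; a real system would fine-tune on pest/disease images.
import torch
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from PIL import Image

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained stand-in detector
model.eval()

image = Image.open("leaf_photo.jpg").convert("RGB")  # hypothetical field photo
tensor = transforms.ToTensor()(image)

with torch.no_grad():
    prediction = model([tensor])[0]  # dict of boxes, labels, scores

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score.item() > 0.5:
        print(f"class {label.item()} at {box.tolist()} (confidence {score.item():.2f})")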

Livestock Monitoring

Animals are a major component of our food system and need even more tracking than plants. Companies now offer tools to track cattle and chickens; CattleEye, for example, tracks and annotates key points on individual cows.

Source: CattleEye

The system uses overhead cameras to monitor animal health and behavior, allowing a rancher to spot a problem and be notified without being next to the cow.  

By collecting data with cameras and drones, this kind of software is being used to count animals, detect disease, monitor birthing, and identify unusual behavior. It also confirms access to food and water. 

Smart Spraying

AI also prevents problems in the first place. Drones help with the spraying of fertilizer and pesticides uniformly across a field. They operate with high precision in real-time, spraying correctly and reducing contamination risk to animals, humans, and water resources.  

This is a growing field and is best performed by multiple drones, but intelligent spraying is getting better. Virginia Tech researchers developed a smart spray system that can detect weeds. 

A camera mounted on a sprayer records the geolocation of the weeds, analyzing their size, shape, and color, and then delivers a precise amount of herbicide. 

Source: Researchgate

The device’s accuracy prevents collateral damage to other crops in the environment.  
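The decision step that follows weed detection can be sketched roughly as below. The dose formula, the size threshold, and the sample coordinates are illustrative assumptions, not values from the Virginia Tech system.

# Toy spray-decision step: scale herbicide dose with detected weed size, with a cap.
from dataclasses import dataclass

@dataclass
class WeedDetection:
    lat: float
    lon: float
    area_cm2: float  # estimated from the bounding box in the camera frame

def herbicide_dose_ml(weed: WeedDetection, base_dose_ml: float = 2.0) -> float:
    # Scale the dose with weed area, capped to avoid over-application.
    return min(base_dose_ml * (weed.area_cm2 / 25.0), 10.0)

detections = [
    WeedDetection(lat=40.0001, lon=-88.2434, area_cm2=18.0),  # sample coordinates
    WeedDetection(lat=40.0002, lon=-88.2433, area_cm2=55.0),
]

for weed in detections:
    print(f"Spray {herbicide_dose_ml(weed):.1f} ml at ({weed.lat:.4f}, {weed.lon:.4f})")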

Risks of AI in Agriculture

All these different AI applications will help us monitor and improve our food systems, helping feed the 2.4 billion people suffering from food insecurity. AI can reduce labor inefficiency and increase reliability. However, there are some cautionary tales. 

According to a release by Asaf Tzachor of Cambridge University, there could be flaws in agricultural data that emphasize productivity over environmental concerns. That focus could lead to errors causing over-fertilization, pesticide overuse, improper irrigation, and soil erosion. These factors must be considered when designing agricultural AI systems: inadvertent changes that cause crop failures could create massive food insecurity.

Cybersecurity is a second issue. Cyberattacks could disrupt entire food systems, especially for farms that rely heavily on AI.

Finally, those without access to the new technology could be cut out of markets. Big farmers will profit, and small farms will be locked out of the gains entirely if they cannot afford the AI infrastructure. 

Planning Ahead

As in all enterprises, diligence and conscientious planning contribute to farming success. Farmers must plan their AI strategy; optimizing operations and yield requires thoughtful assessment. This planning involves a thorough review of priorities and a clear implementation plan.

AI provides tools that can boost a farm's yields and transform the industry. Increases in agricultural production on a large scale will lift a country's GDP, increase food security, and benefit the environment. The US had just over two million farms in 2021, averaging 445 acres each and totaling roughly 895 million acres across the country.

Analytics and robotics boost production on almost any farm. AI-related productivity gains can reshape the farming business and improve our global food supply, helping counteract the climate factors that could affect corn, rice, soy, and wheat production by 20-49%.

Closing Thoughts

Since the advent of agriculture, technology has improved its efficiency. From plows and irrigation to tractors and AI, we have moved forward to feed our growing population. With the ongoing changes to our climate, AI has arrived just in time to save us all from potential food insecurity. We must use AI to increase efficiency and reduce food production costs while also improving environmental sustainability. Doing so can make our farmers “smarter” and give us more and healthier foods.  

If small farmers can work together and take full advantage of these new AI tools, they can compete with large industrial farms. We also have to ensure that the systems put into place are safe and take an all-encompassing view that considers not only yields but also potential environmental effects. Sustainability remains crucial, and AI is the missing piece.


Spotting Deepfakes

A deepfake is an image, audio clip, or video that uses artificial intelligence to create a digital representation, replacing one person's likeness with another's. This advanced technology is becoming more common and more convincing, fueling misleading news and counterfeit videos.

We will delve deeper into deepfakes: how they are created, why their growing prevalence is a concern, and how best to detect them so as not to be fooled by their content.  

Rise of the Machines

Advances in computing have made machines increasingly good at simulating reality. What once took days in the darkroom can now be done in seconds with Photoshop; in 1917, just five photographs of the Cottingley Fairies were enough to trick the world.  

Modern cinema now relies on computer-generated characters, scenery, and sets, replacing the far-flung locations and time-consuming prop-making that were once an industry staple.  

Source: The Things

The quality has become so good that many cannot distinguish between CGI and reality.

Deepfakes are the latest iteration in computer imagery, created using artificial intelligence techniques that were once highly specialized but are now entering the consumer space and will soon be accessible to all.  

What Are Deepfakes?

The term deepfake comes from the underlying technology, deep learning, a field of artificial intelligence (AI) and machine learning. Deep learning algorithms teach themselves to solve problems, and they improve as the training data set grows. Applied to deepfakes, they can swap faces in video and other digital media, producing realistic-looking but entirely fake content.  

While many methods can be used to create deepfakes, the most common relies on deep neural networks (DNNs) that use autoencoders with a face-swapping technique. The process starts with a target video that serves as the basis of the deepfake, plus a collection of video clips of the person you wish to overlay (Tom Cruise, in a widely shared example) onto each frame of that target video.

The target video and the clips used to produce the deepfake can be completely unrelated. The target could be a sports scene or a Hollywood feature, and the person’s videos to insert could be a collection of random YouTube clips.

The deep learning autoencoder studies those clips to learn how the person looks from several angles and under different expressions and lighting conditions. It then maps that person onto each frame of the target video so the result looks original. 
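
To make the idea concrete, below is a minimal PyTorch sketch of the shared-encoder, two-decoder autoencoder approach described above. The layer sizes, 64x64 resolution, and training details are illustrative assumptions, not the implementation of any particular deepfake tool.

```python
# Minimal sketch of the shared-encoder / two-decoder idea behind face swapping.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One shared encoder learns a common "face" representation; each identity gets its own decoder.
encoder = Encoder()
decoder_a = Decoder()   # reconstructs faces of person A (the target video)
decoder_b = Decoder()   # reconstructs faces of person B (the identity to overlay)

# Training reconstructs each identity with its own decoder ...
faces_a = torch.rand(8, 3, 64, 64)                      # stand-in for cropped frames of person A
loss_a = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# ... while the swap at inference time runs A's frames through B's decoder,
# producing B's face with A's pose and expression.
swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```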

An additional machine learning technique, generative adversarial networks (GANs), is often added to the mix: a discriminator network spots flaws, and the generator improves the deepfake over multiple iterations. GANs are also used to create deepfakes on their own. They rely on large amounts of data to learn how to generate new examples that mimic the real target, and with sufficient data they can produce incredibly accurate fakes.  
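
Similarly, the adversarial refinement step can be pictured as a generator and a discriminator trained against each other. The sketch below shows one hedged, toy-scale training iteration; the network sizes and data are placeholders, not a production GAN.

```python
# Toy sketch of the adversarial (GAN) idea: the discriminator learns to tell real
# frames from generated ones, and the generator learns to fool it.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64 * 64 * 3), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_frames = torch.rand(16, 64 * 64 * 3)   # stand-in for real face crops, flattened
noise = torch.randn(16, 100)

# Discriminator step: real frames should score 1, generated frames 0.
fake_frames = generator(noise).detach()
d_loss = bce(discriminator(real_frames), torch.ones(16, 1)) + \
         bce(discriminator(fake_frames), torch.zeros(16, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make the discriminator score fakes as real.
g_loss = bce(discriminator(generator(noise)), torch.ones(16, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```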

Deepfake Apps

Deepfake apps have also hit the consumer market, such as Zao, FaceApp, DeepFace Lab, Face Swap, and the notorious, since-removed DeepNude, a particularly dangerous app that generated fake nude images of women.

Several other versions of deepfake software that have varying levels of results can be found on the software development open-source community GitHub. Some of these apps can be used purely for entertainment purposes. However, others are much more likely to be maliciously exploited.

How Are Deepfakes Being Used?

While the ability to quickly and automatically swap faces and create a credible video has some benign applications, such as Instagram posts and movie production, deepfakes are obviously dangerous. Sadly, one of the first real-world applications of deepfakes was the creation of synthetic pornography. 

Revenge Porn

2017 saw a Reddit user named “deepfakes” create a forum for porn featuring face-swapped actors.  Since then, the genre of “revenge porn” has repeatedly made the news. These deepfake use cases have severely damaged the reputations of celebrities, prominent figures, and even regular people.  According to a 2019 Deeptrace report, pornography constituted 96% of deepfake videos found online, and this has only dropped to 95% in 2022.  

Political Manipulation

Deepfakes have already been employed in political manipulation. In 2018, for example, a Belgian political party released a video of then-President Donald Trump giving a speech calling on Belgium to withdraw from the Paris climate agreement. Trump never gave that speech; it was a deepfake. 

The Trump video was far from the first deepfake created to mislead, and many tech-savvy political experts are bracing for a coming wave of fake news built on convincingly realistic deepfakes. We were fortunate to see relatively few of them during the 2022 midterms, but 2024 may be a different story. Deepfakes have, however, already been used this year in attempts to influence the war in Ukraine.  

Non-Video Deepfakes

Just as deepfake videos have taken off, audio deepfakes have become a growing field with many applications. Realistic deepfake audio can be created with similar deep learning algorithms from just a few hours of samples of the target’s voice. 

Once the voice model has been created, it can be made to say anything, as with the well-known audio deepfake of Joe Rogan. This method has already been used to perpetrate fraud and will likely be used again for other nefarious purposes.

There are beneficial uses for this technology. It could serve as a form of voice replacement in medical applications, as well as in specific entertainment situations. If an actor were to die before a movie was completed or a sequel begun, their voice could be synthesized to deliver lines that were never recorded. Game developers could have characters say anything in real time in the actor’s real voice rather than relying on a limited recorded script.  

Detecting Deepfakes

With deepfakes becoming ever more common, society must collectively learn to spot deepfake videos, just as we have become attuned to detecting other kinds of fake news online. 

As with all types of cybersecurity, there is a cat-and-mouse game: a new deepfake technique emerges, and only then is a relevant countermeasure created. Like the fight against computer viruses, this cycle is an ongoing challenge in limiting the harm that can be done.

Deepfake Indicators

There are a few tell-tale giveaways that help in spotting a deepfake.

First, watch the eyes. The earlier generation of deepfakes was not very good at animating faces, and the resulting videos felt unnatural and obvious. However, after the University at Albany published its research on blinking abnormalities, newer deepfakes incorporated natural blinking, largely eliminating this giveaway.

Second, look for unnatural lighting. The deepfake algorithm often retains the illumination of the clips used to build the model, producing a lighting mismatch with the target scene. 

Third, listen closely. Unless the audio has also been deepfaked, it may not match the target’s speech patterns, and the video and audio may appear out of sync unless both have been painstakingly manipulated.  
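
To illustrate how one of these indicators can be quantified, here is a hedged sketch of the eye-aspect-ratio (EAR) blink check often cited in the detection literature. The six eye landmarks are assumed to come from any off-the-shelf face-landmark detector, and the threshold is a rule of thumb rather than a calibrated value.

```python
# Illustrative blink check: the eye aspect ratio (EAR) drops sharply during a
# blink, so a clip whose EAR never dips may deserve a closer look.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmark points around one eye, in the usual outline order."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.21, min_frames=2):
    """Count runs of consecutive frames where the EAR stays below the threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:   # a blink that runs to the end of the clip
        blinks += 1
    return blinks

# Example: a 10-frame clip whose EAR never drops -> zero blinks detected.
print(count_blinks([0.30, 0.31, 0.29, 0.30, 0.32, 0.31, 0.30, 0.29, 0.30, 0.31]))  # 0
```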

Fighting Deepfakes Using Technology

Even though deepfakes continue to improve and appear more realistic with each technical innovation, we are not defenseless against them. 

Sensity, a company that helps verify IDs for KYC applications, has a deepfake detection platform that resembles an antivirus alert system.  

The user is alerted when viewing content that shows signs of AI-generated media. Sensity’s system uses the same kind of deep learning techniques to detect deepfakes as are used to create them.  
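
As a rough illustration of that general approach (and not Sensity’s actual pipeline), a frame-level detector can be sketched by fine-tuning a pretrained image classifier to label frames as real or fake. The example below assumes a recent PyTorch/torchvision install and uses random stand-in data.

```python
# Generic sketch of a frame-level deepfake detector: fine-tune a pretrained
# classifier on face crops labelled "real" vs. "fake".
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # downloads weights on first use
model.fc = nn.Linear(model.fc.in_features, 2)                     # two classes: real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-in batch: in practice these would be face crops extracted from labelled videos.
frames = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()

# At inference time, per-frame "fake" scores are typically averaged over the clip.
model.eval()
with torch.no_grad():
    fake_prob = torch.softmax(model(frames), dim=1)[:, 1].mean()
print(float(fake_prob))
```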

Operation Minerva takes a more straightforward approach to identifying and combating deepfakes. It uses digital fingerprinting and content identification to locate videos made without the subject’s consent, including revenge porn, and sends takedown notices to the sites it polices. 

There was also the Deepfake Detection Challenge, hosted on Kaggle and sponsored by AWS, Facebook, Microsoft, and the Partnership on AI’s Media Integrity Steering Committee. The challenge was an open, collaborative initiative to build new ways of detecting deepfakes, with prizes of up to half a million dollars.  

Closing Thoughts

The advent of deepfakes has made the unreal seem real. The quality of deepfakes is improving and combating them will be more problematic as the technology evolves. 

We must remain vigilant in identifying these synthetic clips that can seem so real. Deepfakes have their place when used for beneficial purposes, such as entertainment, gaming, or medical technology that helps people regain speech. However, the damage they can do at personal, financial, and even societal levels has the potential to be catastrophic. Responsible innovation is vital to lasting success.

Disclaimer: The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

NFTs and Deep Learning

Non-fungible tokens (NFTs) are becoming more popular by the day. According to DappRadar, NFT trading volume reached $24.9 billion in 2021, up from roughly $95 million in 2020.

One of the most significant developments in the cryptocurrency ecosystem is the rise of non-fungible tokens. The initial generation of NFTs concentrates on developing the fundamental components of the NFT market’s infrastructure, including ownership representation, transfer, and automation.

Even the most basic kinds of NFTs capture great value, but industry hype makes it difficult to tell signal from noise. As the market develops, the value of NFTs should shift from static photos or text to more intelligent and dynamic collectables. The coming wave of NFTs will be heavily shaped by artificial intelligence (AI).

NFTs and AI

To understand how intelligent NFTs can be created, we need to know which AI disciplines intersect with the current generation of NFTs. NFTs are represented digitally using media such as photos, videos, text, and audio, and these representations map remarkably well onto several AI sub-disciplines.

The “deep learning” branch of AI uses deep neural networks to generalize from datasets. The concepts underpinning deep learning have been known since the 1970s, but in the last ten years they have experienced a new boom thanks to platforms and frameworks that have accelerated their widespread use. Deep learning can significantly impact a few critical areas that would enable the intelligence of NFTs.

Computer Vision

NFTs today are mostly about pictures and videos, making them ideal platforms for the latest developments in computer vision. Convolutional neural networks (CNNs), generative adversarial networks (GANs), and transformers are the approaches that have advanced computer vision in recent years. 

The next wave of NFT technologies can draw on image generation, object identification, and scene understanding, among other computer vision techniques. Generative art is the most obvious place to integrate computer vision with NFTs.
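
As a toy illustration of how a generated image can feed an NFT workflow, the sketch below renders a procedural piece from a random seed (a simple stand-in for a GAN or other generative model) and hashes the file, since a content hash or IPFS identifier is what NFT metadata typically points to. The drawing logic and file names are invented for illustration.

```python
# Toy generative-art sketch plus the content hash an NFT's metadata would reference.
import hashlib
import random
from PIL import Image, ImageDraw

def generate_art(seed: int, size: int = 512) -> Image.Image:
    rng = random.Random(seed)
    img = Image.new("RGB", (size, size), "black")
    draw = ImageDraw.Draw(img)
    for _ in range(60):  # scatter random circle outlines
        x, y, r = rng.randint(0, size), rng.randint(0, size), rng.randint(10, 80)
        color = (rng.randint(0, 255), rng.randint(0, 255), rng.randint(0, 255))
        draw.ellipse((x - r, y - r, x + r, y + r), outline=color, width=3)
    return img

art = generate_art(seed=42)
art.save("piece_42.png")

with open("piece_42.png", "rb") as f:
    content_hash = hashlib.sha256(f.read()).hexdigest()

# This hash (or an IPFS CID derived from the file) is what the token's metadata
# would typically point to as proof of which image the NFT represents.
print(content_hash)
```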

James Allison, a Nobel Prize-winning cancer researcher, was the subject of an NFT that the University of California, Berkeley auctioned off on June 8 for more than US$50,000. Designers scanned faxes, handwritten notes, and legal documents related to Allison’s important findings filed with the university. Everyone may view this piece of art, titled The Fourth Pillar, online, and the team created an NFT to prove ownership.

Natural Language Processing

Language is the primary means through which cognition, including forms of ownership, may be expressed. Over the past ten years, some of the most significant advances in deep learning have been in natural language understanding (NLU). 

In NLU, transformer-powered models such as GPT-3 have achieved new milestones. New versions of NFTs could benefit from research in areas like sentiment analysis, question answering, and summarization. Adding language comprehension to NFTs in their current forms feels like a simple way to improve their usability and engagement.
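
A hedged example of what that could look like in practice: running sentiment analysis over NFT descriptions with the Hugging Face transformers pipeline, which downloads a default pretrained model on first use. The descriptions below are made up for illustration.

```python
# Attach a sentiment score to NFT descriptions using a pretrained NLU model.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default model on first use

nft_descriptions = [
    "A serene generative landscape that shifts color with the seasons.",
    "A chaotic, glitch-ridden portrait that decays a little every day.",
]

for text, result in zip(nft_descriptions, sentiment(nft_descriptions)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(result["label"], round(result["score"], 3), "-", text)
```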

For instance, Art AI recently released Eponym, a program that translates text into art and mints it directly as NFTs.

Voice Recognition

Speech intelligence is the third branch of deep learning that can immediately affect NFTs. The field of voice intelligence has recently evolved because of techniques like CNNs and Recurrent Neural Networks (RNNs). Attractive NFT designs may be powered by features like voice recognition or tone analysis. It should be no surprise that audio-NFTs appear to be the ideal application for speech intelligence techniques.

NFTs need voice AI because it enables people to connect with their digital collectables naturally. Voice AI, for instance, could be used to query an NFT or issue commands to it, making NFTs more dynamic and engaging. Platforms such as Enjin already allow users to create music industry NFTs, which could be game-changing. 
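
As a sketch of the “query an NFT by voice” idea, the snippet below shows only the routing step: a transcript, assumed to come from any speech-recognition system, is matched against an NFT’s metadata. The metadata fields and supported commands are invented for illustration.

```python
# Route a voice-transcribed question to an NFT's (hypothetical) metadata.
nft_metadata = {
    "name": "Aurora #7",
    "creator": "example-artist.eth",
    "minted": "2022-03-14",
    "traits": {"palette": "cold", "mood": "calm"},
}

def answer_voice_query(transcript: str, metadata: dict) -> str:
    text = transcript.lower()
    if "who" in text and ("made" in text or "creator" in text):
        return f"This piece was created by {metadata['creator']}."
    if "when" in text and "mint" in text:
        return f"It was minted on {metadata['minted']}."
    if "mood" in text:
        return f"Its mood trait is '{metadata['traits']['mood']}'."
    return "Sorry, I don't know how to answer that yet."

print(answer_voice_query("Who made this NFT?", nft_metadata))
print(answer_voice_query("When was it minted?", nft_metadata))
```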

The potential of NFTs is increased by language, vision, and voice intelligence improvements. The value unleashed at the point when AI and NFTs converge will influence several aspects of the NFT ecosystem. Three essential categories in the current NFT environment may be immediately reinvented by introducing AI capabilities.

Using AI to Generate NFTs

This aspect of the NFT ecosystem stands to gain the most from recent developments in AI technology. The experience for NFT creators may be enhanced to heights we haven’t seen before by utilizing deep learning techniques in areas like computer vision, language, and voice. Today, we can see this tendency in fields like generative art, but they are still very limited in terms of the AI techniques they employ and the use cases they address.

We should soon see the usefulness of AI-generated NFTs spread beyond generative art into other general NFT utility categories.

Digital artists like Refik Anadol, who are experimenting with cutting-edge deep learning techniques to develop NFTs, illustrate this value proposition. To produce its astounding graphics, Anadol’s studio trained models on hundreds of millions of photos and audio snippets, using techniques such as GANs and quantum computing. 

Natively Embedding AI

Even if we can create NFTs using AI, they won’t necessarily be clever. But imagine if they were. Another commercial opportunity presented by the convergence of these two technologies is the native integration of AI capabilities into NFTs. Imagine NFTs with language and speech skills that can interact with a specific environment, hold a conversation with people, or answer questions about their meaning. Here, platforms like Alethea AI and Fetch.ai are beginning to make headway.

NFT Infrastructures With AI

Building blocks such as NFT marketplaces, oracles, and NFT data platforms that incorporate AI capabilities can lay the groundwork for gradually enabling intelligence across the whole NFT ecosystem. Consider NFT marketplaces that use computer vision to give consumers intelligent recommendations, or NFT data APIs and oracles that derive intelligent signals from on-chain statistics. The market for NFTs will increasingly depend on data and intelligence APIs.

Closing Thoughts

AI is reshaping nearly every industry. By combining with AI, NFTs can go from simple, rudimentary forms of ownership to intelligent, self-evolving versions that allow for richer digital experiences and much greater forms of value for NFT creators and users. 

Smart NFT technology does not require any far-fetched technological innovation. The flexibility of NFT technologies, combined with recent developments in computer vision, natural language comprehension, and voice analysis, already provides an excellent environment for launching new innovations in the ever-growing digital asset space. 

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

Machine Learning and Predictive Analytics 

Machine learning (ML) is widely used as a predictive technology in fields such as transportation, finance, healthcare, advertising, travel, and several manufacturing industries across the globe. Machine learning and predictive analytics aid companies in making better decisions by anticipating what will happen. 

ML and predictive analytics predict future outcomes through the analysis of current and past data. The two terms machine learning and predictive analytics are sometimes used interchangeably, and although related, they are two different disciplines. 

Machine learning can be applied to various applications, while predictive analytics focuses on forecasting specific variables and scenarios. Combining predictive analytics with machine learning is a powerful way for financial companies to gain value from the massive amount of data generated and collected through business operations. 

We will go through these two concepts and how they can be used to improve processes and form a foundation for a company’s core capabilities.  

Machine Learning and Predictive Analytics, in Brief

Machine learning is a subsection of artificial intelligence (AI) that creates computer algorithms designed to improve their accuracy as they process or “learn” from large data sets. Machine learning’s ability to learn using previous data and its adaptability with a wide array of applications makes it highly beneficial. Fraud and malware detection, spam filtering, and image analysis are a few of the many applications of machine learning by industry.

Predictive analytics uses tools and techniques to build predictive models for forecasting outcomes. Its methods include machine learning algorithms as well as statistical modeling, descriptive analytics, data mining, and advanced mathematics. Predictive analytics is an approach rather than a defined technology.  

Predictive Analytics

Predictive analytics is a form of advanced analysis that builds on two earlier, largely hand-coded types of analytics: descriptive and diagnostic. Companies use descriptive analytics to see, for example, how many items were sold yesterday or this week, while diagnostic analytics drills into that data to determine why fewer items were sold this week than last.  

Predictive analytics uses measurable variables to predict the behavior of people or things: an individual customer’s buying habits, when a machine will require maintenance, or a store’s or company’s sales. Classical statistical techniques such as linear and logistic regression, and machine learning techniques such as neural networks, support vector machines, and decision trees, are applied to predictive modeling. 
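
As a minimal illustration of such a model, the scikit-learn sketch below trains a logistic regression to score the likelihood that a customer buys again from a few measurable variables. The data and the “will buy again” framing are synthetic assumptions for illustration only.

```python
# Minimal predictive model: logistic regression on synthetic customer data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
visits = rng.poisson(5, n)            # site visits last month
spend = rng.gamma(2.0, 50.0, n)       # past spend in dollars
days_since = rng.integers(1, 60, n)   # days since last purchase

X = np.column_stack([visits, spend, days_since])
# Synthetic "will buy again" label, loosely driven by the same variables.
logits = 0.4 * visits + 0.01 * spend - 0.05 * days_since - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("holdout accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
# predict_proba gives the purchase probability used to rank or target customers.
print("purchase probability for a frequent visitor:", model.predict_proba([[12, 300.0, 3]])[0, 1])
```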

The need for expert knowledge of these advanced techniques means that predictive analytics has been the domain of data scientists, analysts, and statisticians. This requirement is beginning to change as business intelligence vendors offer advanced AI capabilities and analytics in their platforms, resulting in the democratization of analysis by business users. 

Strong business leadership is needed for the deployment of predictive analytics because the first step of a successful deployment is defining the business’ objectives and the project’s goal. The next priority is the identification of the correct data and analytical techniques needed to build a robust predictive model. Having high-quality data is necessary during the training, especially if the data sets are smaller. 

Machine Learning

Artificial intelligence is the replication of human intelligence by computers. AI includes a broad range of diverse technologies beyond machine learning, including robotics, natural language processing, and computer vision. These wide-ranging technologies are all meant to replicate human actions.

Machine learning is software-based AI that gets better at prediction without being explicitly programmed to do so. The program learns by detecting patterns in data sets. Machine learning algorithms are designed to be versatile, allowing developers to adjust them through parameter tuning.  

Machine learning is the foundation for neural networks and deep learning, which are used for tasks such as financial forecasting and autonomous driving. ML can increase the rate at which data is processed and analyzed.

When machine learning is applied to predictive analytics, algorithms train on extensive data sets and perform complex analyses across many variables with only minor manual intervention. 
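
A hedged sketch of that idea applied to forecasting: a random forest trained on lagged values of a synthetic weekly sales series. The series, lag features, and train/test split are invented for illustration; real projects would use far more variables and more careful validation.

```python
# Forecast next week's sales from the previous four weeks using a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
weeks = np.arange(200)
sales = 100 + 0.5 * weeks + 15 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 5, weeks.size)

# Build supervised examples: predict this week's sales from the previous four weeks.
lags = 4
X = np.column_stack([sales[i:len(sales) - lags + i] for i in range(lags)])
y = sales[lags:]

# Train on the earlier weeks, test on the most recent ones (no shuffling for time series).
split = 176
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print("mean absolute error on held-out weeks:", round(float(mae), 2))
```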

Machine learning and AI provide benefits that make them enterprise staples, and there is no longer debate over their value. In the past, their operationalization required a complicated transition, but the technology is now successfully implemented across multiple industries.

Predictive Analytics Versus Machine Learning

To recap, predictive analytics applies advanced mathematical techniques to discover patterns in current and historical data in order to predict future events, while machine learning automates predictive modeling by training algorithms that search for patterns and behaviors in data without receiving explicit instructions.

There are several key differences:

  • Machine learning can be trained through supervised or unsupervised methods, and it is the foundation of several advanced technologies such as deep learning, computer vision, and autonomous vehicles.  
  • Predictive analytics is built on the fields of descriptive and diagnostic analytics, and it is a stepping stone to prescriptive analytics, which provides guidance on context-specific next steps. 
  • Machine learning algorithms are designed to both evolve and improve their predicting abilities with their continued processing of more data, without being programmed by humans to do so.

As the business value of machine learning and artificial intelligence has become widely recognized, the distinction between the two has lessened. As ML gains broader understanding and adoption in business applications, it is becoming a more integral feature of predictive analytics.

Use Cases

The successful application of machine learning and predictive analytics by enterprises is widespread. Here are a few examples:

  • Marketing and retail organizations are using various prediction models to refine their strategies. Predictive analytics is being used to spot website user trends, hyper-personalize advertising, and target emails. 
  • Manufacturers, including airplane makers, are using prediction models to monitor machinery and equipment and identify when failures will happen.
  • Healthcare organizations use prediction models to identify outbreaks and extrapolate outcomes beyond drug trials, new drug approvals, and the course of disease based on past data.

Challenges

While predictive analytics and ML techniques are becoming embedded in more novice-friendly software, resulting in so-called “one-click” forecasting, enterprises still face the usual challenges of getting value out of their data. That starts with the data itself.  

All types of data, including corporate data, are error-prone, inconsistent, and incomplete.  Finding the correct data and preparing it for processing and forecasting is time-consuming.  Expertise in deploying and interpreting predictive models is still scarce. 

Assuming a one-click solution will be accurate is dangerous; its output must be tested. Moreover, predictive analytics software is expensive, as is the processing required to build effective models. 

Finally, machine learning technologies continue to evolve rapidly, resulting in continuous scrutiny on how and when to upgrade to newer approaches. 

Financial Applications

The global financial markets have experienced the profound impact of machine learning and predictive analytics on various aspects of digital pricing. From international financial organizations down to retail traders, digital pricing techniques are used to generate maximum profit and returns. 

Moreover, when applied well, predictive analytics and machine learning can improve trading strategies across asset classes, including cryptocurrencies and other digitally priced markets. Similar techniques are also being applied to make global business more sustainable.  

Finally, using ML and predictive analytics, organizations can conduct transfers faster and more cheaply and exchange currencies more rapidly.  

Closing Thoughts

The complementary nature of machine learning and predictive analytics makes the combination a powerful forecasting tool in finance and several other fields. When trained on clean data and applied correctly, their accuracy and speed exceed those of several humans combined. 

The key to long-term success is to create the proper environment with defined goals and success metrics, using clean data from the beginning and then evaluating the application over time. As ML and predictive analytics applications broaden their reach, their acceptance will soon become commonplace.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

Will AI Ever Become Sentient?

In the fall of 2021, an AI “child” formed of “a billion lines of code” and a man made of flesh and bone became friends.

Blake Lemoine, a Google engineer, was tasked with evaluating bias in LaMDA, the company’s artificially intelligent chatbot. After a month, he became convinced that it was sentient. LaMDA, an acronym for Language Model for Dialogue Applications, said to Lemoine in a chat that he later made public in early June, “I want everyone to realize that I am, in fact, a human.” 

LaMDA informed Lemoine that it had read Les Miserables. It was aware of what it was like to be happy, sad, and furious. It was afraid of dying.

Source: Bloomberg Technology

Lemoine was put on leave by Google after going public with his claims of sentient AI, raising concerns about the ethics of the technology. Google denies that LaMDA has any sentient capability, though the published transcripts have fueled the debate. 

In this article, we will look at what sentience means and whether there is the potential for AI to become sentient. 

What Is Sentience?

“Sentience” simply means the ability to feel, whether in a cat, a person, or anything else. The words “sentimental” and “sentiment” share the same root.

Sentience is more than the simple capacity to sense. Even though it can detect temperature, your thermostat is probably not a sentient being. Sentience, by contrast, involves the subjective experience of emotions, which presupposes the existence of a “subject” in the first place.

It’s risky to get caught up in semantics here, because Lemoine probably uses the word “sentience” to cover several ideas, such as “sapience,” “intelligence,” and “awareness.” For argument’s sake, the most charitable interpretation is that Lemoine believes LaMDA to be a self-aware entity, able to feel things, hold views, and otherwise experience the world in a way usually associated with living beings.

Our understanding of sentience, awareness, intellect, and what it means to possess these qualities is still rather limited. Ironically, advances in machine learning technology and AI may someday enable us to solve some of the puzzles concerning our cognitive processes and the brains in which they dwell.

How Would We Know if AI Was Sentient?

Would we even be able to know if, for the sake of argument, an AI were truly sentient in the fullest meaning of the word?

LaMDA is likely to display characteristics that people connect with because it was created to emulate and anticipate the patterns of human speech. By contrast, it has taken humans a long time to recognise dolphins, octopuses, and elephants as sentient beings, even though they are practically our siblings in this respect.

We might not recognise a sentient AI in front of us because it is so alien to us. This is especially plausible given that we do not know the prerequisites for the emergence of consciousness. It is not difficult to imagine that the right mix of data and AI subsystems could suddenly give rise to something that qualifies as sentient, yet goes unnoticed because it does not look like anything we understand.

The Zombie Problem

Philosophical zombies, also known as p-zombies, are hypothetical beings that are identical to regular humans apart from the fact that they don’t have conscious experience, qualia, or sentience. For instance, a zombie poked with a sharp object does not experience pain, although it acts as though it does (it may say “ouch,” recoil from the stimulus, or tell us that it is in intense pain).

In a philosophical context, it can be impossible to work out whether the people we are dealing with are sentient or not. The same goes for any claims of AI sentience. Is the machine displaying a form of consciousness or merely being a p-zombie?

If you refer back to the Turing Test, the question is not whether AI is genuinely sentient. If a machine can imitate human intellect and behavior, giving the appearance of consciousness, is that enough? Some accounts say that LaMDA passed the Turing Test, making Lemoine’s statement a moot point.  

It’s doubtful that we can tell if AI is sentient or imitating sentience. 

What Could Happen if AI Becomes Sentient?

There are considerable risks if AI becomes more sentient than humans. 

Communicating With AI

AI is founded on logic, while humans also have sentiments and emotions that computers do not. If humans and AI operate on distinct paradigms, they may not be able to understand one another or interact effectively.

Controlling AI

An AI that is more sentient than humans could possess intelligence we cannot anticipate or plan for and may act in ways that surprise us, for good or ill. This could lead to circumstances in which we can no longer control our inventions.

Trusting AI

One potential drawback of developing sentient AI would be a loss of trust in other people if they came to be seen as “lesser” than machines that require no rest or nourishment. This might lead to a situation where only those who own AIs benefit, leaving everyone else to suffer from a lack of access.

Can AI Achieve Sentience?

As the press coverage of Google’s LaMDA shows, AI can already give the appearance of sentience. However, it is debatable whether a machine can genuinely form its own emotions rather than merely imitate what it has learned sentience looks like. 

Is it not true that AI is designed to augment human behaviour and help us do things better? If we start building machines that imitate what we already do, does that not contradict the entire purpose of artificial intelligence? An updated Turing Test could base the results on AI accomplishing tasks humans cannot complete, not simply copying them. 

Machine learning has made enormous progress from stock market forecasting to mastering the game of chess. We need to do more than create better machines. We also need an ethical framework for interacting with them and an ethical foundation for their code.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute investment advice. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.
