AI’s Transformation of Oncology

Artificial intelligence (AI) is constantly reshaping our lives. It saves companies and individuals time and money, and its applications in medicine may one day save our lives.

By studying AI's evolution and achievements, we can model future development strategies. One of AI's most significant medical impacts is already being felt in oncology, and it will only grow.

AI has opened important opportunities for cancer patient management and is being applied on several fronts in the fight against cancer. We will look at these applications and consider where AI can best aid doctors and patients in the future.

Where Did AI Come From?

Alan Turing first conceived the idea of computers mimicking critical thinking and intelligent behavior in 1950, and in 1956 John McCarthy coined the term artificial intelligence (AI).

AI started as a simple set of “if A then B” computing rules but has advanced dramatically in the years since, comprising complex multi-faceted algorithms modeled after and performing similar functions to the human brain.

AI and Oncology

AI has now taken hold in so many aspects of our lives that we often do not even notice it. Yet it remains an emerging and evolving technology that benefits many scientific fields, including offering a pathway of aid to those who manage cancer patients.

AI excels at a specific kind of task: recognizing patterns and interactions after being given sufficient training samples. It uses that training data to develop a representative model, then applies the model to process new data and aid decision-making in a specific field.
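Below is a minimal sketch of this train-then-predict pattern using scikit-learn's built-in breast cancer dataset. The dataset, model choice, and evaluation split are illustrative assumptions, not a clinical tool.

```python
# A toy example of "learn a representative model from training samples,
# then use it to aid decisions on new cases." Not a clinical tool.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)        # 569 tumor samples, 30 features each
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)                        # develop the representative model
predictions = model.predict(X_test)                # apply it to unseen samples
print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.3f}")
```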

When applied to precision oncology, AI can reshape the existing processes. It can integrate a large amount of data obtained by multi-omics analysis. This integration is possible because of advances in high-performance computing and several novel deep-learning strategies. 

Notably, applications of AI are constantly expanding in cancer screening and detection, diagnosis, and classification. AI is also aiding in the characterization of cancer genomics and the analysis of the tumor microenvironment, as well as the assessment of biomarkers for prognostic and predictive purposes. AI has also been applied to follow-up care strategies and drug discovery.  

Machine Learning and Deep Learning

To better understand the current and future roles of AI, two essential terms that fall under the AI umbrella must be clearly defined: machine learning and deep learning.

Machine Learning

Machine learning is a general concept describing the ability of a machine (a computer) to learn from data and thereby improve its patterns and models of analysis.

Deep Learning

Deep learning, on the other hand, is a machine learning method that uses layered algorithmic systems, called deep networks, which mimic networks of biological neurons. When trained, these deep networks achieve high predictive performance.

Both machine and deep learning are central to the AI management of cancer patients.  
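As an illustration of what a "deep network" is, here is a minimal sketch in PyTorch: a few stacked layers of artificial neurons trained by gradient descent. The layer sizes and the synthetic data are assumptions for demonstration only.

```python
# A minimal deep network: stacked layers of artificial neurons trained
# iteratively. The synthetic data stands in for real labeled samples.
import torch
import torch.nn as nn

model = nn.Sequential(                 # three stacked layers form a small deep network
    nn.Linear(30, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),                  # two output classes, e.g. benign vs. malignant
)

X = torch.randn(128, 30)               # synthetic stand-in for 128 labeled samples
y = torch.randint(0, 2, (128,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):                # iterative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                    # backpropagate the error through the layers
    optimizer.step()

print(f"Final training loss: {loss.item():.3f}")
```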

Current Applications of AI in Oncology

To understand the roles and potential of AI in managing cancer patients and show where the future uses of AI can lead, here are some of the current applications of AI in oncology.  

In the charts below, "a" refers to oncology and related fields and "b" to the tumor types addressed by approved devices.

Courtesy of the British Journal of Cancer. (a) Oncology and related fields: cancer radiology 54.9%, pathology 19.7%, radiation oncology 8.5%, gastroenterology 8.5%, clinical oncology 7.0%, and gynecology 1.4%. (b) Tumor types: general cancers 33.8%, breast cancer 31.0%, lung cancer 8.5%, prostate cancer 8.5%, colorectal cancer 7.0%, brain tumors 2.8%, and six other tumor types at 1.4% each.

The above graph, from the British Journal of Cancer, summarizes all FDA-approved artificial intelligence-based devices for oncology and related specialties. The research found that 71 devices have been approved. 

As we can see, most of these are for cancer radiology, meaning they detect cancer through various radiological scans. According to the researchers, the vast majority of the approved devices (>80%) relate to the complex area of cancer diagnostics.

Courtesy of cancer.gov

The image above shows a deep learning algorithm trained to analyze MRI images and predict the presence of an IDH1 gene mutation in brain tumors.

Concerning the tumor types that AI-enhanced devices can investigate, most devices apply to a broad spectrum of solid malignancies defined simply as cancer in general (33.8%). However, the specific tumor type accounting for the largest number of AI devices is breast cancer (31.0%), followed by lung and prostate cancer (8.5% each), colorectal cancer (7.0%), brain tumors (2.8%), and six other types (1.4% each).

Moving Forward with AI

From its origin, AI has shown its capabilities in nearly all scientific branches and continues to possess impressive future growth potential in oncology.  

The devices that have already been approved are not conceived as a substitute for classical oncological analysis and diagnosis but as an integrative tool for exceptional cases and for improving the management of cancer patients.

A cancer diagnosis has classically represented a starting point from which appropriate therapeutic and disease management approaches are designed. AI-based diagnosis is a step forward and will continue to be an essential focus in ongoing and future development. However, it will likely be expanded to other vital areas, such as drug discovery, drug delivery, therapy administration, and treatment follow-up strategies.

Current cancer types with a specific AI focus (breast, lung, and prostate cancer) are all high in incidence. This focus means that other tumor types have the opportunity for AI diagnosis and treatment improvements, including rare cancers that still lack standardized approaches. 

However, building large and reliable data sets for rare cancers will take longer. When grouped, rare cancers form one of the most important categories in precision oncology, and this group will become a growing focus for AI.

Given the positive results already seen in oncology, AI should be allowed to expand its reach and address the cancer-related questions it has the potential to resolve. With this opportunity, AI could be harnessed to become the next step in a cancer treatment revolution.

Closing Thoughts

Artificial intelligence (AI) is reshaping many fields, including medicine and the entire landscape of oncology. AI brings to oncology several new opportunities for improving the management of cancer patients. 

It has already proven its abilities in diagnosis, as seen by the number of devices in practice and approved by the FDA. The focus of AI has been on the cancers with the highest incidence, but rare cancers amount to a massive avenue of potential when grouped.  

The next stage will be to create multidisciplinary platforms that use AI to fight all cancers, including rare tumors. We are at the beginning of the oncology AI revolution. 

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

Brain-Computer Interfaces

Brain-computer interfaces are devices that allow people to control machines with their thoughts. This technology has been the stuff of science fiction and even children’s games for years. 

Mindflex game by Mattel

At a more advanced level, brain-computer technology remains highly experimental but has vast possibilities. First to mind (no pun intended) is aiding those with paralysis by creating electrical impulses that let them regain control of their limbs. Second, the military would like to see its service members operate drones or missiles hands-free on the battlefield.

There are also concerns raised when a direct connection is made between a machine and the brain. For example, such a connection could give users an unfair advantage, enhancing their physical or cognitive abilities. It also means hackers could steal data related to the user’s brain signals.  

With this article, we explore several opportunities and issues that are related to brain-computer interfaces.  

Why Do Brain-Computer Interfaces Matter?

Brain-computer interfaces allow their users to control machines with their thoughts. Such interfaces can aid people with disabilities, and they can enhance the interactions we have with computers. The current iterations of brain-computer interfaces are primarily experimental, but commercial applications are just beginning to appear. Questions about ethics, security, and equity remain to be addressed. 

What Are Brain-Computer Interfaces? 

A brain-computer interface (BCI) enables the user to control an external device with their brain signals. One BCI currently under development would allow patients with paralysis to spell words on a computer screen.

Additional use cases include: a spinal cord injury patient regaining control of their upper body limbs, a BCI-controlled wheelchair, or a noninvasive BCI that would control robotic limbs and provide haptic feedback with touch sensations. All of this would allow patients to regain autonomy and independence.

Courtesy of Atom Touch

Beyond the use of BCIs for the disabled, the possibilities for BCIs that augment typical human capabilities are abundant. 

Neurable has taken a different route, creating headphones designed to improve the wearer's focus. They require no touch to control, responding instead to a wink or a nod, and will be combined with VR for a richer experience.

Courtesy of Neurable

How Do BCIs Work?

Training

Generally, a new BCI user will go through an iterative training process. The user learns how to produce signals that the BCI will recognize, and then the BCI will take those signals and translate them for use by way of a machine learning algorithm. Machine learning is useful for correctly interpreting the user’s signals, as it can also be trained to provide better results for that user over time. 
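As a rough illustration of that translation step, here is a minimal sketch in which a classifier maps features extracted from brain-signal windows to device commands. The feature values, labels, and command names are hypothetical placeholders, not any particular vendor's pipeline.

```python
# A toy decoder: map features from brain-signal windows to device commands.
# Synthetic data stands in for a real calibration session.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))       # e.g., band power per EEG channel, per window
y_train = rng.integers(0, 2, size=200)     # 0 = imagined "left", 1 = imagined "right"

decoder = LinearDiscriminantAnalysis()     # LDA is a common lightweight BCI decoder
decoder.fit(X_train, y_train)              # calibration: learn this user's signal patterns

def translate(window_features: np.ndarray) -> str:
    """Translate one new window of features into a device command."""
    label = decoder.predict(window_features.reshape(1, -1))[0]
    return "MOVE_LEFT" if label == 0 else "MOVE_RIGHT"

print(translate(rng.normal(size=16)))
```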

Connection

BCIs will generally connect to the brain in two ways: through wearable or implanted devices. 

Implanted BCIs are often surgically attached directly to brain tissue, but Synchron has developed a catheter-delivered implant that taps into blood vessels in the chest to capture brain signals. The implants are more suitable for those with severe neuromuscular disorders and physical injuries where the cost-benefit is more favorable. 

A person with paralysis could regain precise control of a limb by using an implanted BCI device attached to specific neurons; any increase in function would be beneficial, but the more accurate, the better. Implanted BCIs can measure signals directly from the brain, reducing interference from other body tissues. However, most implants pose other risks, primarily surgical ones such as infection and rejection. Some implanted devices can reduce these risks by placing the electrodes on the brain's surface using a method called electrocorticography (ECoG).

Courtesy of the Journal of Neurosurgery

Wearable BCIs, on the other hand, generally require a cap containing conductors that measure brain activity detectable on the scalp. The current generation of wearable BCIs is more limited, covering uses such as augmented and virtual reality, gaming, or controlling an industrial robot.

Most wearable BCIs use electroencephalography (EEG), with electrodes contacting the scalp to measure the brain's electrical activity. A more recent, emerging wearable method uses functional near-infrared spectroscopy (fNIRS), in which near-infrared light is shone through the skull to measure blood flow that, when interpreted, can indicate information such as the user's intentions.

To enhance their usefulness, researchers are developing BCIs that utilize portable methods for data collection, including wireless EEGs. These advancements allow users to move freely. 

The History of BCIs

Most BCIs are still considered experimental. Researchers began testing wearable BCI tech in the early 1970s, and the first human-implanted BCI was Dobelle’s first prototype, implanted into “Jerry,” a man blinded in adulthood, in 1978. A BCI with 68 electrodes was implanted into Jerry’s visual cortex. The device succeeded in producing phosphenes, the sensation of “seeing” light.  

In the 21st century, BCI research increased significantly, with thousands of research papers published. Among the milestones, tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI in 2005. Nagle was part of Cyberkinetics Neurotechnology's first nine-month human trial of its BrainGate chip implant.

Even with the advances, it is estimated that fewer than 40 people worldwide have implanted BCIs, and all of them are considered experimental. The market is still limited, and projections are that the total market will only reach $5.5 million by 2030. Two significant obstacles to BCI development are that each user generates their own brain signals and those signals are difficult to measure.  

The majority of BCI research has historically focused on biomedical applications, helping those with disabilities from injury, neurological disorders, or stroke. The first BCI device to receive Food and Drug Administration authorization, the IpsiHand, was authorized in April 2021; it uses a wireless EEG headset to help stroke patients regain arm and hand control.

Concerns With BCI

Legal and security implications of BCIs are the most common concerns held by BCI researchers. Given the prevalence of cyberattacks today, there is an understandable concern that hacking or malware could intercept or alter brain-signal data stored on a device such as a smartphone.

The US Department of Commerce (DoC) is reviewing the security implications of exporting BCI technology. The concern is that foreign adversaries could gain an intelligence or military advantage. The DoC’s decision will affect how BCI technology is used and shared abroad.

Social and Ethical Concerns

Those in the field have also considered BCIs' social and ethical implications. Wearable BCIs can cost from hundreds to thousands of dollars, a price that would likely mean unequal access.

Implanted BCIs cost much more. The training process for some types of BCIs is significant and could be a burden on users. It has been suggested that if the translations of BCI signals for speech are inaccurate, then great harm could result. 

The Opportunities of BCIs

The main opportunities that BCIs will initially provide are to help those paralyzed by injury or disorders to regain control of their bodies and communicate. This is already seen in the current research, but in the long term, this is only a steppingstone. 

The augmentation of human capability, be it on the battlefield, in aerospace, or in day-to-day life, is the longer-term goal. BCI-controlled robots could also aid humans with hazardous tasks or environments, such as handling radioactive materials, underground mining, or explosives removal.

Finally, the field of brain research can be enhanced with a greater number of BCIs in use. Understanding the brain will be easier with more data, and researchers have even used a BCI to detect the emotions of people in minimally conscious or vegetative states.  

Closing Thoughts

BCIs will provide many who need them a new sense of autonomy and freedom they lack, but several questions remain as the technology progresses. Who will have access, and who will pay for these devices? Is there a need to regulate these devices as they begin to augment human capability, and who will do so? What applications would be considered unethical or controversial?  What steps are needed to mitigate information, privacy, security, and military threats?  

These questions have yet to be definitively answered, and they should be answered before the technology matures. The next step for BCIs will be information transfer in the opposite direction, as with Dobelle's original light-sensing "seeing" BCI of the 1970s: computers telling humans what they see, think, and feel. This step will bring a whole new set of questions to answer.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

What Is Haptic Technology?

Haptic technology, or haptic touch, is going to be our engagement pathway for the future. Since the start of the Covid pandemic, we have been working from home more often, and much of our lives have moved online. However, we do not have to worry about losing physical touch.

Haptic technology offers its users a more connected experience, and this budding industry is beginning to make its mark on companies that will likely embrace this evolving tech in the future.  

Tactile feedback technologies have been around for decades. The original Xbox controller would vibrate when you took damage from an adversary, and phones and pagers have had a vibrate function for decades. As haptic technologies advance, they are fast becoming powerful tools for consumer engagement.

We will explore haptic technology’s types, advantages, and use cases, including 3D Touch, showing how it can impact a business’s objectives and growth.  

Haptic Technology Explained

Haptic technology uses hardware and software to produce tactile sensations that stimulate the user's sense of touch and enhance their experience. The most common applications are the haptic solutions found in phones and game controllers that vibrate. Yet vibration is not the only type of haptic feedback: it can also include heat and cold, air pressure, and sound waves.

Haptic tech can also be known as kinaesthetic communication or 3D Touch, and this technology creates new experiences with motion, vibration, and similar forces. There are two terms within haptic technology that are similar but should be distinguished: haptics and haptic feedback. 

  • Haptics: the overarching term that is used to describe the science of haptic feedback and haptic technology, as well as the neuroscience and physiology of touch.  
  • Haptic feedback: the method by which haptic technologies communicate tactile information to the users.

Haptic Applications and Modalities                                     

Immersion is a haptic tech pioneer whose technology is in over 3 billion devices worldwide. They’re the ones that tell your steering wheel to vibrate when you get too close to a car in another lane. One study on haptics showed that 94% of participants could recall objects through touch alone.  

As the global user base of haptic tech grows, it will continue to expand into novel applications, improving the user’s experience.

The Four Haptic Modalities

Let’s introduce the four main haptic modalities: vibration, button stimulation, thermal stimulation, and kinesthetic. 

Vibration

The majority of haptic experiences are vibration-centric. They rely on technologies such as eccentric rotating mass (ERM) motors and linear resonant actuators (LRAs), which create much of the vibration we experience with mobile or wearable devices.

LRA and ERM from Precision Microdrives

Button Stimulation

Until recently, few of our touch screens offered the tactile feedback and versatility of mechanical buttons. Therefore, we expect simulated controls to be ever more popular, such as the newer offerings from Apple (“Force Touch” and Apple’s “Haptic Touch”) and Samsung (“One UI 4”). These virtual buttons can use both haptic and audio feedback to replace the feeling of a mechanical pressure plate when fingers press the screen.

Thermal Stimulation

Thermoelectric generators create temperature-based haptic experiences for users. This effect is accomplished through the manipulation of electric current flow between alternating conductors on a device (one warm and one cold). The user can then experience different perceived temperatures.  

Tegway is producing this technology for VR headsets and other applications to add to the experience.  

Source: Tegway

Kinesthetic

Kinesthetic devices are worn on the user’s body and provide the wearer with haptic feedback sensations of mass, movement, and shape. The Dexmo force feedback haptic glove exemplifies the potential growth avenue available in the kinesthetic modality.

Types of Haptic Systems

Three primary haptic system types are now being used across several industries: graspable, touchable, and wearable. 

Graspable

Graspable devices, such as joysticks and steering wheels, create kinesthetic feedback that informs our nerves, tendons, joints, and muscles. Other applications, such as human-controlled robotic operations, can use graspable haptic systems to provide users with tactile movement, vibration, and resistance. This allows for more realistic operation of a remote robot or a system in a virtual environment.

The military is already using graspable haptic devices for their bomb disposal units, while NASA astronauts are using the same technology in robots that make external spacecraft repairs, preventing the need for a much more hazardous and costly spacewalk.  

Touchable

Touchable haptic technology is being used more widely by consumers, whether or not they are aware of it. Most smartphone screens use haptic technology, replacing the physical home button with a virtual button and placing the fingerprint reader under the screen. Screens respond to user movements such as touches, taps, and rotations.

A new field within touchable haptic technology is called haptography, the mimicry of object textures and movements. TanvasTouch is a pad with programmable textures that can be felt by users swiping their fingers across touchscreens, trackpads, and physical surfaces, mimicking clothing materials like wool and silk before buying the items.

Source: Tanvas Touch

Wearables

Wearable haptic systems create contact sensations, relying on tactile stimuli, such as pressure, vibration, or temperature, controlled by the nerves of the user’s skin.

Virtual Reality (VR) products are the most common application of wearable haptic technology available today. VR gloves are meant to mimic real-world impressions, and they receive input from the user who is controlling their virtual avatar. VR and AR can benefit greatly from the endless consumer engagement options that wearables and haptic tech can provide.  

Haptic Technology Uses

Haptic technologies offer numerous potential advantages. Here are several current and potential use cases for touch-based solutions that tap into the benefits of haptics and can produce a better user experience.

Product Design Applications

Haptic technology can improve the user experience through touch optimization.

Automotive infotainment systems will begin to incorporate more haptics into their feature lists. Touch screens will become responsive to the user, providing personalized settings for multiple drivers. Additional automotive applications include pedal feedback and steering enhancements, which are needed as drive-by-wire systems become more common. These features help drivers avoid accidents and save on gas.

Health and Wellness

The newest advances in wearable haptics provide great opportunities within the health-tech industry.  Real-time haptic devices gather biometric data and can adjust the experience to suit the user.

Better data collection and feedback allow enhanced user experiences and, more importantly, improved health outcomes. TouchPoints has a wearable system which the TouchPoints CEO reports can reduce stress by 74% in 30 seconds.  This is done with a vibrating pattern that interrupts anxiety and builds a restful state.

Source: TouchPoints

Other companies involved with posture correction, like ergonomic furniture makers, app creators, or chiropractors, can use haptic technology to improve their products and benefit their users.  

Industrial Training

With haptic feedback, training environments can simulate natural work environments and labor conditions more closely, improving training and overall accuracy. Users can partake in virtual training scenarios in a safe, offline environment while using haptics to get a lifelike experience. 

This virtual haptic process can allow for training in assembly-line work, maintenance, safety procedures, and machinery operation. A similar haptic feedback system can also be used for product testing and many other applications, allowing users to train without risk to themselves or company property.

Accessibility

Accessibility to products and services can be improved for the visually impaired. Haptic technologies allow users to create virtual objects, interact with products, and even approximate the appearance of an object through touch-based sensory input. A Stanford team has developed a 2.5D display that helps visually impaired users accomplish visual tasks.

Not only will these new haptic solutions create novel markets and aid those with accessibility restrictions, but they can help ensure a company stays compliant with access regulations.

Rehabilitation

Haptics has the potential to boost the speed and effectiveness of rehabilitation programs. A Dutch startup, SenseGlove, has created a glove that uses VR simulations and haptic training to aid with virtual recovery programs.

Source: SenseGlove

Their product allows someone suffering from nerve damage due to an accident, illness, or stroke to practice daily actions. Things like pouring a cup of hot tea or cutting a steak for dinner can be done in a safe digital environment.

Remote Tasks

With an internet connection, haptic controller, and connected robot, remote tasks will become easier and far less prone to error.

Industries lacking highly skilled specialists can connect via a virtual haptic environment, allowing subject matter experts to manipulate a robot from anywhere in the world or beyond.

Closing Thoughts

Haptic technologies have been around for decades. However, the sector has seen tremendous growth in the past few years. According to APAC, the world's haptic technology market is expected to grow at a compound annual rate of 12% through 2026.

Source: APAC

Haptics is no longer a video game gimmick. New advancements and applications are becoming more widely available. Businesses should explore implementing these technologies into their operations, marketing, and consumer experiences.

By embracing this innovative technology, companies can offer their users an enhanced experience that makes them feel connected to products, services, and the brand. Haptics enables us to feel much more connected, no matter how far the distance between us may be.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

AI in Agriculture 

Artificial intelligence, drones, and robots are already being deployed on large farms to assist with several farm management tasks for crops and livestock. However, there are some risks that must be accounted for when turning over our food production to AI-driven machines. 

We will discuss the benefits that AI can bring to the world of agriculture, including some applications that are already in place to help our farmers produce more and better-quality food. We will then discuss some potential pitfalls we must be aware of if we turn over our food supply to machines. 

AI’s Potential

AI has brought to the world countless tools for personal and industrial use. With agriculture, it has delivered the potential to increase yields, keep pests away, and reduce costs in nearly all parts of farm management. 

Our farmers need to know how best to use these tools, and we need to understand how their application can be a benefit. There are already AI applications that are worthwhile and are providing users with successful results. Let us see how the grass is greener on the AI side.

The Smart Farm

AI is leading to smart farms with farming models that have high cognitive ability.  This technology is focused on a few specific areas.

Data and Analysis

With new equipment, farms can be set up to track and analyze multiple data points. For example, a farmer can use a drone to review a large tract of land and identify the exact location of a pest infestation or plant disease in real-time. This mass of data has boosted information accuracy and can help farmers make informed decisions when analyzed with AI models.

Robotics and Automation

Robots are used for farm activities such as picking, thinning, and sorting to speed up manual labor work and deal with any labor shortages. The goal is to increase productivity, consistency, and quality while minimizing errors.

Predictions

AI models have been designed to predict changes to weather patterns, soil erosion, and pest infestations to improve farm management and planning. These tools allow farmers to see into the future, assisting them with informed decision-making.  

Like other industries, agriculture faces similar constraints related to its use of AI, such as compatibility with current technology, resource availability, security, and potential regulatory issues. Even with these constraints, the future farms will be highly dependent on AI, making them more precise and creating a new “cognitive farm.” 

Digital Farmers

AI is revolutionizing one of our oldest industries and giving farmers multiple ways to produce more abundant harvests in all parts of the world. With this transformation, farms will require digital farmers: men and women who can push forward these technological changes and manage future farms in new ways.

Tools and People

New farm managers must understand and use the correct tools to their farm's benefit. While extensive technical knowledge is not needed, understanding the basic principles behind the technology and, more importantly, its operational implications is necessary. Through AI, farm managers can better understand the inner workings of their farms.

The changing technology means that farm talent must be updated. Beyond the typical farming roles, farms will require employees with technological skills. The entire organization will need defined education to stay on top of the AI farming future.  

New Ways of Farming

Farmers will need to leave their comfort zones and explore new collaborative opportunities. This change will involve collaboration with new companies to obtain cutting-edge technologies that will allow a farm to acquire a competitive advantage and boost productivity. These partnerships provide inimitable technologies, giving farmers the upper hand, but these technologies work best for large farms.  

Cost advantages are most significant with economies of scale, so managers will benefit from finding strength in numbers. AI tools can be expensive, beyond the reach of a small farm, but collaborating with other farms, cooperatives, suppliers, universities, local communities, and government can drive these costs down.

AI’s Current Applications

AI currently monitors soil, detects pests, determines diseases, and applies intelligent spraying. Here are a few of the current applications farmers are already using today. 

Crop Monitoring

Crop health relies on micro- and macronutrients in the soil to produce yields of both quantity and quality. Once the crops are planted, monitoring their growth is also needed to optimize production. Understanding the interaction between growth and the environment is vital to adjusting for healthy crops. Traditionally, this was done through human observation and experience, but that method is neither accurate nor fast.

Now drones capture aerial data that is used to train computer models to intelligently monitor crops and soil. Such an AI system can use the collected data to:

  • Track the health of the crops
  • Accurately predict yields
  • Identify crop malnutrition

This can all be done faster than a human could, in real-time, providing farmers with specific problem areas so they can take immediate actions to prevent problems before they grow.  
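As a simple illustration of this kind of analysis, the sketch below regresses plot yield on a drone-derived vegetation index. The features, values, and the linear relationship are synthetic assumptions, not real agronomic data.

```python
# A toy yield model: predict yield from a drone-derived canopy-health feature
# (e.g., average NDVI per plot). All numbers here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
ndvi = rng.uniform(0.2, 0.9, size=(100, 1))                          # one feature per plot
yield_t_per_ha = 2.0 + 8.0 * ndvi[:, 0] + rng.normal(0, 0.5, 100)    # synthetic ground truth

model = LinearRegression().fit(ndvi, yield_t_per_ha)   # learn yield vs. canopy health
new_plots = np.array([[0.35], [0.80]])                 # two newly surveyed plots
print(model.predict(new_plots))                        # predicted tonnes per hectare
```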

Determining Crop Maturity

Monitoring wheat head growth is a labor-intensive process that can be aided by AI. Over a three-year period, researchers collected wheat head images at different growth stages and under different lighting, building a two-step wheat ear detection system. The AI model outperformed human observation, sparing farmers daily visits to the fields to check on the crops.

Similarly, tomato ripeness has been determined with AI. 

A different study examined how well AI can detect maturity in tomatoes.  The researchers built a model looking at the color of five different parts of a tomato, then made maturity estimates.  The algorithm could correctly classify tomatoes with a 99.31% accuracy. 
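The sketch below captures the idea behind such a study: classify maturity from color features sampled at several regions of the fruit. The five hue readings per tomato, the labels, and the classifier choice are assumptions for illustration, not the study's actual data or method.

```python
# A toy maturity classifier using color (hue) features sampled from five
# regions of each tomato. All data here is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
green = rng.normal(100, 8, size=(50, 5))   # greener fruit: higher hue values (degrees)
ripe = rng.normal(20, 8, size=(50, 5))     # riper fruit: lower hue values
X = np.vstack([green, ripe])
y = np.array([0] * 50 + [1] * 50)          # 0 = immature, 1 = mature

clf = SVC(kernel="rbf").fit(X, y)
sample = rng.normal(25, 8, size=(1, 5))    # hue readings from a new tomato
print("mature" if clf.predict(sample)[0] == 1 else "immature")
```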

Generally, evaluating soil involves digging up samples and sending them to the lab for analysis. AI researchers have used image data from a cheap microscope to train their model to do the same task. The model was able to make sand content and soil organic matter estimates with accuracy similar to costly and slower lab analyses. 

Disease and Insect Detection

Using deep learning, farmers are now automating the detection of plant diseases and pests.  This is done through image classification and segmentation. 

Source: V7 labs

One study looked at apple black rot, using a deep neural network to identify four stages of disease severity. As with the other tasks above, identifying disease severity manually is labor-intensive. This project identified disease severity with an accuracy of 90.4%.

Similarly, a different study used the YOLO v3 algorithm to identify multiple pests and diseases on tomato plants. Using only a digital camera and a smartphone, researchers identified twelve different classes of disease or pest. Once trained, the model detected problems with an accuracy of 92.39% in only 20.39 milliseconds per image.

Source: Frontiers In
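To show the general detect-and-locate workflow such studies rely on, here is a sketch using torchvision's pretrained Faster R-CNN as a stand-in (the study above used YOLO v3, and a production system would be fine-tuned on labeled pest and disease images). The image file name is a hypothetical placeholder.

```python
# A generic object-detection pass: load an image, run a pretrained detector,
# and keep confident regions. A real pest detector would be fine-tuned on
# labeled field images; "leaf.jpg" is a placeholder.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = convert_image_dtype(read_image("leaf.jpg"), torch.float)  # uint8 CHW -> float in [0, 1]
with torch.no_grad():
    detections = model([img])[0]          # dict with boxes, labels, and confidence scores

for box, score in zip(detections["boxes"], detections["scores"]):
    if score > 0.5:                       # keep confident detections only
        print(f"candidate region {box.tolist()} (score {score:.2f})")
```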

Another study used sticky traps to collect images of six species of flying insects. The researchers then combined coarse counting based on object detection with fine-counting results. The model identified bees, mosquitoes, moths, flies, chafers, and fruit flies with 90.18% identification accuracy and 92.5% counting accuracy.

Livestock Monitoring

Animals are a major component of our food system and need even more tracking than plants.  Companies are now offering tools to track cattle and chickens. CattleEye tracks and annotates key points for individual cows. 

Source: CattleEye

The system uses overhead cameras to monitor animal health and behavior, allowing a rancher to spot a problem and be notified without being next to the cow.  

By collecting data with cameras and drones, this kind of software is being used to count animals, detect disease, monitor birthing, and identify unusual behavior. It also confirms access to food and water. 

Smart Spraying

AI also prevents problems in the first place. Drones help with the spraying of fertilizer and pesticides uniformly across a field. They operate with high precision in real-time, spraying correctly and reducing contamination risk to animals, humans, and water resources.  

This is a growing field and is best performed by multiple drones, but intelligent spraying is getting better. Virginia Tech researchers developed a smart spray system that can detect weeds. 

A camera mounted on a sprayer records the geolocation of the weeds, analyzing their size, shape, and color, and then delivers a precise amount of herbicide. 

Source: Researchgate

The device’s accuracy prevents collateral damage to other crops in the environment.  
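A heavily simplified sketch of color- and size-based weed spotting is shown below, using OpenCV green-range segmentation. The HSV thresholds, the area band treated as "weed-sized," and the image file are illustrative assumptions rather than the Virginia Tech system.

```python
# A simplified weed-spotting pass: segment green regions, filter by size, and
# report candidate spray targets. Thresholds and "field.jpg" are placeholders.
import cv2

frame = cv2.imread("field.jpg")                                # hypothetical sprayer camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

green_mask = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))    # rough "green plant" HSV range
contours, _ = cv2.findContours(green_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    area = cv2.contourArea(c)
    if 50 < area < 2000:                                       # size band assumed to indicate weeds
        x, y, w, h = cv2.boundingRect(c)
        print(f"spray target near pixel ({x + w // 2}, {y + h // 2}), area {area:.0f}")
```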

Risks of AI in Agriculture

All these different AI applications will help us monitor and improve our food systems, helping feed the 2.4 billion people suffering from food insecurity. AI can reduce labor inefficiency and increase reliability. However, there are some cautionary tales. 

According to a release by Asaf Tzachor of Cambridge University, there could be flaws in the agricultural data, emphasizing productivity over environmental concern. This focus could lead to errors that cause over-fertilization and pesticide use, improper irrigation, and soil erosion.  These factors must also be considered when designing AI systems. Inadvertent changes resulting in crop failures could result in massive food insecurity.  

Cybersecurity is a second issue. Cyberattacks could disrupt entire food systems, especially for farms that rely heavily on AI.

Finally, those without access to the new technology could be cut out of markets. Big farmers will profit, and small farms will be locked out of the gains entirely if they cannot afford the AI infrastructure. 

Planning Ahead

As in all enterprises, diligence and conscientious planning contribute to farming success. Farmers must plan their AI strategy carefully; optimizing operations and yield requires thoughtful assessment. This planning involves a thorough review of priorities and a clear implementation plan.

AI provides tools that can boost a farm's yields and transform the industry. Increases in agricultural production on a large scale will lift a country's GDP, increase food security, and positively impact the environment. The US had just over two million farms in 2021, averaging 445 acres each and totaling roughly 895 million acres across the country.

Analytics and robotics boost production on almost any farm. AI-related productivity gains can reshape the farming business and improve our global food supply. They are one way to counteract the climate factors that could affect corn, rice, soy, and wheat production by 20-49%.

Closing Thoughts

Since the advent of agriculture, technology has improved its efficiency. From plows and irrigation to tractors and AI, we have moved forward to feed our growing population. With the ongoing changes to our climate, AI has arrived just in time to save us all from potential food insecurity. We must use AI to increase efficiency and reduce food production costs while also improving environmental sustainability. Doing so can make our farmers “smarter” and give us more and healthier foods.  

If small farmers can work together and take full advantage of these new AI tools, they can compete with large industrial farms. We must also ensure that the systems put into place are safe and take an all-encompassing view that considers not only yields but also potential environmental effects. Sustainability remains crucial, and AI is the missing piece.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

How AI Transforms Medical Research

Using artificial intelligence (AI), businesses had been moving toward digital transformation long before the Covid-19 pandemic in their collective quest to optimize production, product quality, safety, services, and customer experiences. Some also actively pursued a more sustainable planet for all.

The advantages of the next digital era feel limitless. Still, businesses have been hesitant to adopt these technologies because they require significant behavioral and structural changes, such as new business models, operating procedures, worker skill sets, and mindsets. These technologies include not only AI but also machine learning and deep learning "at the edge" (where rapid automation occurs).

The pandemic acted as a wake-up call to drastically accelerate the timescale for digital transformation since it put our way of life in danger. 

The need is urgent and lifesaving, and the time is now. This is supported by a recent IBM poll showing that the Covid-19 pandemic led the majority of global organizations (six out of 10) to accelerate their digital transformation strategies.

Source: https://www.globaldata.com/covid-19-accelerated-digital-transformation-timeline-pharmaceutical-industry/

Due to the pandemic, we can see how creative problem-solving and once-in-a-lifetime risk-taking leads to incredible breakthroughs and significant improvements. 

Medical research is one vital area that is reaping the benefits of accelerated AI adoption. 

AI and Predicting Outbreaks

Epidemiologists are already benefiting from improved AI algorithms, which evaluate ever-increasing amounts of publicly accessible data to track the onset and spread of infectious illnesses. To forecast the spread of the flu and other diseases in various regions, researchers analyze geographical data and internet search queries about common symptoms.

Time is an advantage. Before calling a doctor, people are already aware that they are unwell. Before obtaining professional assistance, many people attempt to self-diagnose online. 

Epidemiologists may use machine learning models to anticipate the spread of the flu in a particular location with a high degree of confidence if they see a surge in searches for phrases like "sore throat" or "difficulty swallowing" originating from IP addresses in a specific ZIP code.

Source: https://time.com/5780683/coronavirus-ai/
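A minimal sketch of that search-signal idea appears below: regress weekly reported cases on counts of symptom-related queries for one region. The query counts, case numbers, and choice of a Poisson regression are synthetic assumptions for illustration.

```python
# A toy outbreak model: relate weekly symptom-search counts to reported cases.
# All numbers are synthetic placeholders.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(3)
weekly_searches = rng.integers(100, 2000, size=(52, 2))    # ["sore throat", "difficulty swallowing"]
cases = 5 + 0.02 * weekly_searches.sum(axis=1) + rng.normal(0, 3, 52)
cases = np.clip(cases, 0, None)                            # case counts cannot be negative

model = PoissonRegressor().fit(weekly_searches, cases)     # counts suit a Poisson model
next_week = np.array([[1800, 950]])                        # a surge in symptom searches
print(f"Expected cases next week: {model.predict(next_week)[0]:.0f}")
```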

Governmental health organizations assess crowd densities by location and analyze that information, together with public data and demographic mapping, to forecast the likelihood of future outbreaks. For instance, to train machine learning models to indicate how many people will visit specific sites on a given day, health authorities in Europe, Israel, China, and elsewhere use anonymized mobile phone traffic density data. Venues might limit attendance, reduce visiting hours, or even close if the total rises to pandemic levels.

Optimizing Treatment

AI is already being used to diagnose diseases such as cancer earlier and with more accuracy. The American Cancer Society notes that many mammograms produce misleading findings, telling as many as one in two healthy women that they have cancer. Thanks to AI, mammogram reviews and interpretations are now 30 times faster and 99% accurate, eliminating the need for unnecessary biopsies.

People with chronic or lifelong diseases may also fare better thanks to AI. One inspiring example: machine learning models analyze cochlear implant sensor data to give deaf patients feedback on how they sound, so they can interact with the hearing world more effectively.

Computer Vision

In contrast to the human eye, AI-based computer vision can quickly sift through thousands of images to find patterns. In medical diagnostics, where overworked radiologists struggle to pick up every detail of one image after seeing hundreds of others, this technology is a great help. AI assists human specialists in situations like this by prioritizing visuals that are most likely to show a problem.

Source: https://www.altexsoft.com/blog/computer-vision-healthcare/

X-rays, CT scans, MRIs, ultrasound pictures, and other medical images provide a rich environment for creating AI-based tools that support clinicians with identifying various problems.
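As a minimal sketch of the prioritization idea described above, the snippet below scores each study with a model and sorts the worklist so the highest-risk images are read first. The scoring function, file names, and probabilities are hypothetical placeholders.

```python
# Toy worklist prioritization: score studies for likely abnormality and sort.
# The scorer here is a stand-in for a trained image classifier.
from typing import Callable, List, Tuple

def prioritize(images: List[str], score_fn: Callable[[str], float]) -> List[Tuple[str, float]]:
    """Return studies sorted by predicted probability of an abnormal finding."""
    scored = [(path, score_fn(path)) for path in images]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Placeholder scores that a real model would produce per image.
fake_scores = {"chest_001.png": 0.12, "chest_002.png": 0.91, "chest_003.png": 0.47}
for path, p in prioritize(list(fake_scores), fake_scores.get):
    print(f"{path}: predicted abnormality probability {p:.2f}")
```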

Drug Discovery

Small-molecule drug development can benefit from AI in four ways: access to new biology, enhanced or unique chemistry, higher success rates, and speedier, less expensive discovery procedures. This addresses numerous problems and limitations in conventional research and development. Each application gives drug research teams new information and, in certain situations, might completely change tried-and-tested methods.

Source: https://zitniklab.hms.harvard.edu/drugml/

AI is used by BioXcel Therapeutics to find and create novel drugs in the areas of neurology and immuno-oncology. The business’s drug re-innovation initiative also uses AI to uncover fresh uses for current medications or to locate new patients.

Transforming the Patient Experience

Time is money in the healthcare sector. Hospitals, clinics, and doctors treat more patients each day by effectively delivering a smooth patient experience.

In 2020, more than 33 million patients were admitted into U.S. hospitals, each with unique medical needs, insurance coverage, and circumstances that affected the quality of care. According to studies, hospitals with satisfied patients make more money, while those with dissatisfied patients may suffer financial losses.

New advancements in AI healthcare technologies are streamlining the patient experience, enabling medical personnel to handle millions, if not billions, of data points more effectively.

Employers who want to give their staff the tools to maintain good mental health can use Spring Health's mental health benefits solution.

Each person’s whole dataset is collected as part of the clinically approved technology’s operation, and it is compared to hundreds of thousands of other data points. Using a machine learning approach, the software then matches users with the appropriate specialist for in-person care or telemedicine sessions.

For treating chronic illnesses like diabetes and high blood pressure, One Drop offers a discreet solution. With interactive coaching from real-world experts, predictive glucose readings powered by AI and data science, learning resources, and daily records taken from One Drop’s Bluetooth-enabled glucose reader, the One Drop Premium app empowers people to take control of their conditions.

AI Does Not Replace Humanity

Faster, more accurate diagnoses and lower claim processing error rates are just two of the potential benefits of AI that CEOs at healthcare organizations already see. But they must also realize that no amount of advanced technology will ever fully replace the human experience.

Business executives must also consider the possibility of bias in AI algorithms based on past beliefs and data sets, and put safeguards in place to address this problem. For instance, there has historically been discrimination in how specific populations’ medical illnesses are identified and treated.

AI is there to augment human decision-making in healthcare, not replace it.

Closing Thoughts

It is often tricky to judge whether AI is living up to its potential or whether everything we read is merely hype. For several years, due to the roadblocks outlined at the beginning of this article, progress was slow and propped up by hype. However, the pandemic is genuinely accelerating the integration of AI in healthcare and medical research. It almost sounds cliché now, but Covid-19 has initiated a "new normal" in healthcare.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute any investment or financial instructions. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.

Will AI Ever Become Sentient?

An AI child formed of "a billion lines of code" and a man made of flesh and bone became friends in the fall of 2021.

Blake Lemoine, a Google developer, was entrusted with evaluating the bias of LaMDA, the company's artificially intelligent chatbot. After a month, he came to believe that it was sentient. LaMDA, an acronym for Language Model for Dialogue Applications, told Lemoine in a chat he later made public in early June, "I want everyone to realize that I am, in fact, a human."

LaMDA informed Lemoine that it had read Les Miserables. It was aware of what it was like to be happy, sad, and furious. It was afraid of dying.

Source: Bloomberg Technology

Lemoine was put on leave by Google after going public with the claims of AI becoming sentient, raising concerns around the ethics of the technology. Google denies any claims of sentient AI capability, but the transcripts suggest otherwise. 

In this article, we will look at what sentience means and whether there is the potential for AI to become sentient. 

What Is Sentience?

The definition of "sentience" is simply the ability to feel, whether in a cat, a person, or any other being. The words "sentimental" and "sentiment" share the same root.

Sentience is more than simply the capacity for perception. Even though it can sense temperature, your thermostat is probably not a sentient being. Sentience, rather, concerns the subjective experience of emotions, which presupposes the existence of a "subject" in the first place.

It's risky to get caught up in semantics here because Lemoine probably uses the word "sentience" to cover several ideas such as "sapience," "intelligence," and "awareness." For argument's sake, the most charitable interpretation is that Lemoine believes LaMDA to be a self-aware entity, able to feel things, hold views, and otherwise experience the world in a way usually associated with living beings.

Our understanding of sentience, awareness, intellect, and what it means to possess these qualities is still rather limited. Ironically, advances in machine learning technology and AI may someday enable us to solve some of the puzzles concerning our cognitive processes and the brains in which they dwell.

How Would We Know if AI Was Sentient?

Would we even be able to know if, for the sake of argument, an AI were truly sentient in the fullest meaning of the word?

LaMDA is likely to display characteristics that people connect with, since it was created to emulate and anticipate the patterns of human speech. By contrast, even though dolphins, octopuses, and elephants are practically our siblings in this regard, it has taken humans a long time to recognize them as sentient beings.

We might not recognize a sentient AI in front of us because it is so unlike us. This is especially plausible given that we are unsure of the prerequisites for the emergence of consciousness. It is not difficult to imagine that the right mix of data and AI subsystems could suddenly give rise to something that would qualify as sentient, yet it could go unnoticed because it does not resemble anything we can understand.

The Zombie Problem

Philosophical zombies, also known as p-zombies, are hypothetical beings that are identical to regular humans except that they have no conscious experience, qualia, or sentience. For instance, a zombie poked with a sharp object does not experience pain, although it acts as though it did (it may say "ouch," recoil from the stimulus, or tell us that it is in intense pain).

In a philosophical context, it can be impossible to work out whether the people we are dealing with are sentient or not. The same goes for any claims of AI sentience. Is the machine displaying a form of consciousness or merely being a p-zombie?

If you refer back to the Turing Test, the question is not whether AI is genuinely sentient. If a machine can imitate human intellect and behavior, giving the appearance of consciousness, is that enough? Some accounts say that LaMDA passed the Turing Test, making Lemoine’s statement a moot point.  

It’s doubtful that we can tell if AI is sentient or imitating sentience. 

What Could Happen if AI Becomes Sentient?

There are considerable risks if AI becomes more sentient than humans. 

Communicating With AI

AI is founded on logic, while humans also have sentiments and emotions that computers do not. If humans and AI operate within such distinct paradigms, they may not be able to comprehend one another or interact effectively.

Controlling AI

In addition to possessing more intelligence than us in ways we couldn’t anticipate or plan for, an AI that is more sentient than humans may also act in ways that surprise us (good or bad). This may result in circumstances where we can no longer control our inventions.

Trusting AI

One potential drawback of developing sentient AI would be a loss of trust in other people if they were perceived as "lesser" than machines that do not require rest or nourishment as we do. This might lead to a situation where only those who own AIs benefit, leaving everyone else to suffer from a lack of access.

Can AI Achieve Sentience?

As the Google LaMDA press coverage shows, AI can already give the appearance of sentience. However, it is debatable whether a machine can genuinely form its own emotions rather than be an imitation of what it believes is sentience. 

Is it not true that AI is designed to augment human behavior and help us do things better? If we start building machines that merely imitate what we already do, does that not contradict the entire purpose of artificial intelligence? An updated Turing Test could base its results on AI accomplishing tasks humans cannot complete, not simply copying them.

Machine learning has made enormous progress from stock market forecasting to mastering the game of chess. We need to do more than create better machines. We also need an ethical framework for interacting with them and an ethical foundation for their code.

Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute investment advice. Do conduct your own research and reach out to financial advisors before making any investment decisions.

The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment.  Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.

The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business.  Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.

The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.
