The potential for artificial intelligence in the biotech and pharmaceutical industries is huge, from drug discovery to enrolling patients and designing clinical trials. But as the industry works toward that future, it must also be mindful of the technology’s drawbacks, which is essential to making it a useful tool in the long run.
Hallucinations are one of the challenges to the widespread use of AI and machine learning. In healthcare, algorithms feeding false information to clinicians could be particularly harmful, says Wael Salloum, co-founder and chief scientific officer at natural language processing company Mendel AI.
Currently, large language models are being used to decipher patient records and guide doctors and other caregivers toward appropriate interventions, including medications. For example, if a patient is taking one medication, the LLM won’t recommend a second treatment that could lead to complications. But when that output is treated as trustworthy, misinformation becomes dangerous, Salloum said.
“The goal of these LLM programs is to produce really good English and convince people that it’s true. When doctors are summarizing medical records to make decisions, potential mistakes can be a matter of life and death,” Salloum said.
Hallucinations, defined
ChatGPT, a more general-purpose LLM, defines hallucinations as “the generation of content that is not based on real or existing data, but is generated by extrapolation or creative interpretation of training data by machine learning models.” Many hallucinations are minor, but according to IBM, they can lead to serious problems, from spreading misinformation to helping bad actors plan cyberattacks.
Mendel’s platform, Hypercube, combines LLMs with logic-based AI to reduce the incidence of hallucinations in large-scale medical research. A partnership with Google Cloud announced this month will allow pharmaceutical companies and other healthcare organizations to access the platform through the tech giant’s marketplace.
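Mendel hasn’t published Hypercube’s internals, so the sketch below is only a generic illustration of the hybrid pattern described above: an LLM proposes something, and a logic-based layer checks the proposal against explicit rules before it reaches a user. The `suggest_treatment` stub, the interaction table and the drug names are all assumptions made up for the example, not Mendel’s code or data.

```python
# Hypothetical sketch of pairing an LLM with a logic-based check. The LLM call is
# stubbed out and the interaction table is invented for illustration only.

KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}),            # well-known bleeding risk
    frozenset({"simvastatin", "clarithromycin"}),  # well-known interaction
}

def suggest_treatment(patient_record: str) -> str:
    """Stand-in for an LLM that proposes a medication from a patient record."""
    return "aspirin"  # a generated suggestion would go here

def violates_rules(suggestion: str, current_meds: list[str]) -> bool:
    """Logic-based check: flag suggestions that form a known risky pair."""
    return any(frozenset({suggestion, med}) in KNOWN_INTERACTIONS for med in current_meds)

def recommend(patient_record: str, current_meds: list[str]) -> str | None:
    suggestion = suggest_treatment(patient_record)
    if violates_rules(suggestion, current_meds):
        return None  # defer to a clinician rather than surface a risky suggestion
    return suggestion

print(recommend("patient on warfarin reporting joint pain", ["warfarin"]))  # None
```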
As AI becomes a game changer in the life sciences, Salloum hopes that platforms like Hypercube can build trust in AI among the researchers, scientists and physicians who use it.
“It’s really important that everything the system produces is explainable and traceable. Many of the doctors we spoke to said they spend more time reviewing summaries of original notes than they do reading the notes themselves,” Salloum said. “Over-hyping AI could damage the entire industry, because once trust is lost, it can never be regained.”
Building trustworthy AI
The inclusion in the Google Cloud Marketplace is no coincidence. In fact, Hypercube works a lot like the Google search engine that’s so prevalent in many people’s online lives. When a user searches for something, the engine consults an internal reference index it has already built rather than combing the entire internet on the spot, Salloum says.
Similarly, Hypercube builds a knowledge base from millions of patient records and traces its answers back to the original records, reducing the risk of hallucinations. In this sense, Hypercube is more of a “research engine” than a search engine, Salloum says.
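As a rough illustration of that “research engine” idea, here is a minimal sketch of a pre-built index that returns each answer together with the record it came from, so every hit is traceable. The records, the toy keyword matching and the function names are assumptions for the example, not Hypercube’s actual design.

```python
# Minimal sketch of querying a pre-built index with provenance. The records and
# the keyword matching are invented for illustration only.

RECORDS = {
    "note-001": "Patient started metformin for type 2 diabetes in 2021.",
    "note-002": "Follow-up visit: blood pressure controlled on lisinopril.",
}

# Build the index once, ahead of any query (analogous to a search engine's crawl).
INDEX = {
    doc_id: set(text.lower().replace(".", "").replace(":", "").split())
    for doc_id, text in RECORDS.items()
}

def query(question: str) -> list[tuple[str, str]]:
    """Return matching records as (doc_id, text) pairs so each hit is traceable."""
    terms = set(question.lower().split())
    return [
        (doc_id, RECORDS[doc_id])
        for doc_id, tokens in INDEX.items()
        if terms & tokens  # at least one shared keyword
    ]

for doc_id, text in query("diabetes medication"):
    print(f"{doc_id}: {text}")  # the source record is always part of the answer
```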
“The goal of these LLM programs is to produce really good English and convince people that it’s true. When doctors are summarizing medical records to make decisions, potential mistakes can be a matter of life and death.”
Wael Salloum
Co-founder and Chief Scientific Officer of Mendel AI
Like many AI applications, Hypercube works best when used as a tool that supports human expertise.
“We’re there to find literature to review, but at the end of the day, human intelligence is needed to take the lead. Technology and AI can be used to support humans, but they can’t be used to replace humans,” Salloum said. “We’re still a long way from that, and it’s important not to overstate things.”
Many of Mendel’s clients are companies with raw data that can be used for scientific research, Salloum said, and Mendel also helps pharmaceutical companies link their data with patient populations.
Salloum said Hypercube also incorporates a “world model,” a set of fundamental definitions in medicine that is almost philosophical in nature.
“What is medicine? What is therapy? What is cancer? What is cell differentiation? All of this is condensed into an LLM, along with certain structures that are fine-tuned to the task,” Salloum says. “So the system itself creates and justifies its output.”
The larger goal of AI in healthcare is to provide access to information that may not be accessible otherwise, Salloum said, and making that information more reliable by mitigating the presence of hallucinations is at the heart of that access.
“Our mission is to democratize healthcare by creating a single source of medical knowledge that understands everything we learn from a patient’s journey, from successes to failures,” Salloum says, “and the applications are ultimately limitless.”