Large Language Models (LLMs) are having a moment
The world has woken up to the power of AI models such as ChatGPT, and it seems we all can't get enough of them these days. For those of you who don't know, ChatGPT is an example of a Large Language Model.
Large language models are neural network-based models that have been trained on vast amounts of text data to generate natural-sounding text in a variety of styles and formats. These models can be used for a wide range of tasks, including language translation, question answering, text summarization, and more. Some examples of large language models include GPT-3, BERT, and T5.
We at Wisecube have just started exploring what this means for us, but it's pretty clear that wonderful things are possible. Wisecube does something very different from ChatGPT, in a very different way. But they have a common interface: natural language. And this means that ChatGPT can "talk to" Wisecube Knowledge Graphs just like humans do, with Wisecube turning the natural language it receives from ChatGPT into a precise, symbolic computational language and applying its computational knowledge capabilities on top of our knowledge graph engine.
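To make that bridge concrete, here is a minimal sketch of the idea: a question in natural language is translated into a structured lookup against a tiny triple store. Everything below is illustrative, not Wisecube's actual API; a real system would use an LLM for the translation step, where here a hard-coded pattern stands in for it.

```python
# A toy knowledge graph as (subject, predicate, object) triples.
# The triples are invented purely for illustration.
TRIPLES = [
    ("EGFR", "is-a", "gene"),
    ("EGFR", "associated-with", "lung cancer"),
    ("gefitinib", "targets", "EGFR"),
]

def translate(question: str) -> tuple[str, str]:
    """Turn a narrow class of questions into a (predicate, object) lookup.

    A real system would use an LLM here; this hard-coded pattern is a
    stand-in to show the shape of the translation step.
    """
    if question.startswith("What targets "):
        entity = question.removeprefix("What targets ").rstrip("?")
        return ("targets", entity)
    raise ValueError("unsupported question")

def query(predicate: str, obj: str) -> list[str]:
    """Return all subjects linked to `obj` by `predicate`."""
    return [s for s, p, o in TRIPLES if p == predicate and o == obj]

print(query(*translate("What targets EGFR?")))  # ['gefitinib']
```

The key point is the division of labor: the language model side only has to produce a small, symbolic query, and everything after that is exact computation over verified data.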
The shortcomings of Large Language Models
But while Large Language Models are remarkable at automating many human-like tasks, not everything that's useful to do is quite so "human-like". Some of it is instead more formal and structured. Indeed, one of the great achievements of our civilization over the past several centuries has been to build up the paradigms of mathematics, the exact sciences, and, most importantly, computation, creating a tower of capabilities quite different from what pure human-like thinking can achieve.
The other deficiency of ChatGPT-like models is that they cannot support factual queries with hard evidence from up-to-date, verifiable information. This means they can often "hallucinate" answers that sound reasonable but may be wrong or outdated. That may not be a problem for normal day-to-day questions, but it could prove disastrous in mission-critical domains like healthcare, biomedicine, and law.
Knowledge Graphs to the rescue
This is where knowledge graphs can really help. As a refresher, a knowledge graph is a data model that represents entities and their relationships in a structured, graphical format. These entities can be anything from people, places, and things to concepts and ideas. The relationships between entities are represented as edges in the graph, and can include things like "is-a," "part-of," and "has-a." Knowledge graphs can store and organize large amounts of structured and unstructured data, and are commonly used in fields such as artificial intelligence, natural language processing, and the semantic web. They can improve search and recommendation systems, and provide more contextually relevant information to users.
Knowledge graphs are firmly in the symbolic computing camp of AI, where conceptual entities and relationships are represented as a graph. They are extremely useful for validating and computing facts in a given domain from verified sources of information such as research articles and datasets.
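As an illustration of that validation role, the sketch below checks a claim against a small hand-built set of triples using the edge types mentioned above. The graph contents are invented for the example and are not drawn from a real biomedical graph.

```python
# Illustrative fact check against a tiny hand-built knowledge graph.
# The triples are example data, not a real curated source.
FACTS = {
    ("EGFR", "is-a", "gene"),
    ("EGFR", "part-of", "ErbB family"),
    ("ErbB family", "has-a", "tyrosine kinase domain"),
}

def is_supported(subject: str, relation: str, obj: str) -> bool:
    # A claim counts as supported only if the exact triple is present;
    # anything else is flagged as unverified rather than guessed at.
    return (subject, relation, obj) in FACTS

print(is_supported("EGFR", "is-a", "gene"))  # True
print(is_supported("EGFR", "is-a", "drug"))  # False: unverified claim
```

This is the essential contrast with a pure language model: instead of generating a plausible-sounding answer, the graph either supports a statement with an explicit, traceable fact or declines to confirm it.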
The best of both worlds
For decades there's been a dichotomy in thinking about AI between "statistical approaches" of the kind ChatGPT uses, and "symbolic approaches" that are in effect the starting point for the Wisecube platform. But now, thanks to the success of ChatGPT, as well as all the work we've done in making the Wisecube platform understand natural language, there's finally the opportunity to combine these to make something much stronger than either could ever achieve on its own.
It’s a tremendously powerful way of working. And the point is that it’s not just important for us humans. It’s equally, if not more, important for human-like AIs as well—immediately giving them what we can think of as computational knowledge superpowers that leverage the non-human-like power of structured computation and structured knowledge.
In the example above, ChatGPT gives us some generic information about EGFR but no precise information about the clinical trials related to EGFR.
The screenshot above is from the Wisecube platform, and as you can see, the same question can be queried visually, returning precise answers powered by the knowledge graph engine.
The Path forward
As we have demonstrated, by combining the power of ChatGPT-like Large Language Models with Wisecube's knowledge graph platform, we believe we can build something that exceeds the sum of its parts. We believe this is the future of information research and information retrieval in general. We are going to move away from a search/document-centric approach toward a symbolic computation approach powered by Large Language Models and Knowledge Graphs.
The Wisecube Knowledge Graph Engine is a platform for unifying and synthesizing your private and public data from disparate data sources using Large Language Models (like ChatGPT). These sources include biomedical literature and chemical, protein, and side-effect databases. You can also customize the graph to add proprietary data sources.
By centralizing biomedical knowledge and powering it with Large Language Models, Wisecube can help organizations build a connected graph of concepts and evidence from millions of documents and databases, uncovering explicit connections and inferring undiscovered links using GPT. If you are looking to explore patterns, uncover insights, and make discoveries in your biomedical research area, schedule a call with us today and get started with knowledge graphs.
Get in touch with us if you would like to learn more about how to combine Large Language Models like GPT and Knowledge Graphs and unlock the power of the siloed data in your organization.