Knowledge graphs vs. vector databases

Thank you for tuning in to this audio-only podcast presentation. This is week 144 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “Knowledge graphs vs. vector databases.”

Don’t panic, the Google Scholar searches are coming in fast and furious on this one [1]. We had a footnote in the first sentence today. Megan Tomlin, writing over at Neo4j, had probably the best one-line definition of the difference, noting that knowledge graphs sit in the human-readable data camp while vector databases are more of a black box [2]. I actually think that eventually one super large knowledge graph will emerge and become the underpinning of all of this, but that has not happened yet, given that the largest one in existence, the one Google holds, will always remain proprietary.
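To make that human-readable versus black box distinction concrete, here is a minimal sketch. The facts, entity names, and embedding numbers are invented purely for illustration; a real system would use a graph database and a learned embedding model, but the contrast in what you can read back out holds either way.

```python
import math

# Knowledge graph side: explicit, human-readable (subject, predicate, object) triples.
triples = [
    ("Ada Lovelace", "wrote_about", "the Analytical Engine"),
    ("the Analytical Engine", "designed_by", "Charles Babbage"),
]

def kg_lookup(subject):
    """Return every stored fact about a subject; the result is readable as-is."""
    return [t for t in triples if t[0] == subject]

# Vector database side: opaque embeddings queried by similarity.
embeddings = {
    "Ada Lovelace": [0.9, 0.1, 0.3],
    "Charles Babbage": [0.8, 0.2, 0.4],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def vector_lookup(query_vec):
    """Return the nearest stored item; the numbers themselves explain nothing."""
    return max(embeddings, key=lambda k: cosine(embeddings[k], query_vec))

print(kg_lookup("Ada Lovelace"))          # facts you can read directly
print(vector_lookup([0.85, 0.15, 0.35]))  # just a nearest neighbor
```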

Combining two LLMs… right now you could call them one after another, but I’m not finding an easy way to pool them into a single model. I wanted to just say to my computer, “use Bayesian pooling to combine the most popular LLMs from Hugging Face,” but that is not an available command at the moment. A lot of incompatible content is being generated in the vector database space. People are stacking LLMs, working in sequence, or making parallel calls to multiple models. What I was very curious about was how to go about merging LLMs, combining LLMs, actual model merges, ingestion of models, or even a method to merge transformers. I know that is a tall order, but it is one that would take so much already spent computing cost and move it from sunk to additive in terms of value.
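As a rough illustration of the call-them-one-after-another workaround, here is a sketch using the Hugging Face transformers pipeline API. The checkpoint names and prompt are placeholders; the point is only that the two models stay separate and get chained, never pooled into a single model.

```python
from transformers import pipeline

drafter = pipeline("text-generation", model="gpt2")        # first model drafts a continuation
refiner = pipeline("text-generation", model="distilgpt2")  # second model continues the draft

prompt = "Knowledge graphs differ from vector databases because"
draft = drafter(prompt, max_new_tokens=40)[0]["generated_text"]
refined = refiner(draft, max_new_tokens=40)[0]["generated_text"]

print(refined)  # two models chained in sequence, not merged into one
```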

A few papers exist on this, but they are not exactly solutions to this problem. 

Jiang, D., Ren, X., & Lin, B. Y. (2023). LLM-Blender: Ensembling large language models with pairwise ranking and generative fusion. arXiv preprint arXiv:2306.02561. https://arxiv.org/pdf/2306.02561.pdf (project page: https://yuchenlin.xyz/LLM-Blender/)

Wu, Q., Bansal, G., Zhang, J., Wu, Y., Zhang, S., Zhu, E., … & Wang, C. (2023). AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155. https://arxiv.org/pdf/2308.08155.pdf 

Chan, C. M., Chen, W., Su, Y., Yu, J., Xue, W., Zhang, S., … & Liu, Z. (2023). ChatEval: Towards better LLM-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201. https://arxiv.org/pdf/2308.07201.pdf

Most of the academic discussions, and even cutting-edge papers like AutoGen, are about orchestration of models rather than merging, combining, or ingesting many models into one. I did find a discussion on Reddit from earlier this year about how to merge the weights of transformers [3]. It’s interesting what ends up on Reddit. Sadly, that subreddit is closed due to a dispute over third-party apps.
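For what it is worth, the simplest version of what that thread is after looks something like a naive weight average. This is only a sketch under strong assumptions: both checkpoints must share the exact same architecture, the second checkpoint name below is a placeholder for a fine-tuned sibling of the first, and there is no guarantee the averaged model behaves sensibly.

```python
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained("gpt2")
model_b = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder: must be the same architecture

state_a = model_a.state_dict()
state_b = model_b.state_dict()

# Element-wise average of every floating-point parameter tensor; non-float
# buffers (masks, counters) are copied through unchanged.
merged_state = {
    name: (state_a[name] + state_b[name]) / 2.0 if state_a[name].is_floating_point() else state_a[name]
    for name in state_a
}

model_a.load_state_dict(merged_state)   # model_a now carries the averaged weights
model_a.save_pretrained("merged-gpt2")  # save the result as a new checkpoint
```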

Exploration into merging and combining Large Language Models (LLMs) is indeed at the frontier of machine learning research. While academic papers like “LLM-Blender” and “AutoGen” offer different perspectives, they primarily focus on ensembling and orchestration rather than true model merging or ingestion. The challenge lies in the inherent complexities and potential incompatibilities when attempting to merge these highly sophisticated models.
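To show that ensembling-versus-merging difference in practice, here is a toy output-level ensemble: each model proposes a completion and a scorer picks one. This is a deliberately simplified stand-in for what LLM-Blender does with a trained pairwise ranker; the scorer below is purely a placeholder, and nothing about the models themselves gets merged.

```python
from transformers import pipeline

members = [
    pipeline("text-generation", model="gpt2"),
    pipeline("text-generation", model="distilgpt2"),
]

def score(text: str) -> float:
    return float(len(text))  # placeholder heuristic, not a real quality signal

prompt = "A knowledge graph stores facts as"
candidates = [m(prompt, max_new_tokens=30)[0]["generated_text"] for m in members]
print(max(candidates, key=score))  # keep the candidate the scorer ranks highest
```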

The quest for effectively pooling LLMs into a single model or merging transformers is a journey intertwined with both theoretical and practical challenges. Bridging the gap between the human-readable data realm of knowledge graphs and the more opaque vector database space, as outlined at the beginning of this podcast, highlights the broader context in which these challenges reside. It also underscores the necessity for a multidisciplinary approach, engaging both academic researchers and the online tech community, to advance the state of the art in this domain.

In the upcoming weeks, we will delve deeper into community-driven solutions and explore the potential of open-source projects in advancing the model merging discourse. Stay tuned to The Lindahl Letter for a thorough exploration of these engaging topics.

Footnotes:

[1] https://scholar.google.com/scholar?hl=en&as_sdt=0%2C6&q=knowledge+graph+vector+database&btnG= 

[2] https://neo4j.com/blog/knowledge-graph-vs-vectordb-for-retrieval-augmented-generation/ 

[3] https://www.reddit.com/r/MachineLearning/comments/122fj05/is_it_possible_to_merge_transformers_d/ 

What’s next for The Lindahl Letter? 

  • Week 145: Delphi method & Door-to-door canvassing
  • Week 146: Election simulations & Expert opinions
  • Week 147: Bayesian Models
  • Week 148: Running Auto-GPT on election models
  • Week 149: Modern Sentiment Analysis

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.

The careful curation of content

It’s one of those things where it is hard to put words on a page about it. Working with one of the chat systems to make content seems to trivialize the writing process. My day starts with an hour of focused academic work. That time is the fulfilled promise of decades of training that included a lot of hard work to get to this point. I can focus on a topic and work toward understanding it. All of that requires my focus and attention on something for that hour. Sometimes on the weekends I spend a couple of hours doing the same thing on a very focused topic. Those chat models with their large language model (LLM) backends produce content within seconds. It’s literally a 1:60 ratio for output: it takes me an hour to produce what one of them creates within a minute, including the time it takes to enter the prompt.

Maybe I did not expect this type of interaction to really affect me in this way. Everything about my writing output has been called into question, along with what exactly is going to happen now. The door has been flung open to the creation of content. Central to that problem is the reality that the careful curation of content within academia and the publish-first curation of the media are going to get flooded. Both systems are going to get absolutely overloaded with submissions. Something has to give based on the amount of attention that exists. Nobody is minting any new volume of attention, and the channels for grabbing that attention are relatively limited. The next couple of years are going to be a mad scramble toward some sort of equilibrium between the competing forces of content curation and flooding.

This really is something that I’m concerned about on an ongoing basis. Do all the books, photos, articles, and paintings from the before times just end up with a higher value weighting going forward? Will this AI revolution have cheapened the next generation of information delivery in ways we will not fully appreciate until the wave has passed and we can see the aftermath? Those questions are at the heart of what I’m concerned about. Selfishly, they are questions about the value and purpose of my own current writing efforts. More broadly, they are questions about the value of writing within our civil society as we work toward the curation of sharable knowledge. We all work toward that perfect possible future either with purpose or without it. Knowledge is built on the shoulders of the giants that came before us, each generation adding to our collective understanding of the world around us. Anyone with access and an adventurous spirit can pick up the advancement of some very complex efforts to enhance the academy’s knowledge on a topic.

Maybe I’m worried that the degree of flooding will flatten information so much that the ability to move things forward will diminish. Sorting, seeking, and trying to distill value from an oversupply of newly minted information may well create that diminishing effect. We will move from intellectual overcrowding in the academy to just an overwhelming sea of derivative content marching along beyond any ability to constrain or consume it. I’m going to stop with that last argument, as it may be the best way to sum this up.