TechTalks
How LLMs and VLMs are revolutionizing robotics
We are still scratching the surface of what is possible in the physical world with LLMs and VLMs
8 hrs ago
•
Ben Dickson
ReFT outperforms PEFT methods in fine-tuning LLMs
Representation Fine-Tuning (ReFT) is a technique for fine-tuning LLMs on specific tasks by modifying only a small fraction of their representations.
Apr 15
•
Ben Dickson
How LLMs will kill Medium's business model
Medium is struggling to adapt its Partner Program to the deluge of LLM-written content. Here is what it means for centralized content platforms.
Apr 11
•
Ben Dickson
LLMs, from playing Street Fighter to real-time applications
A project sets LLMs to play real-time games like Street Fighter III. The findings have implications for applications with real-time requirements.
Apr 10
•
Ben Dickson
Understanding the security of open-source machine learning models
Greg Ellis, GM of Application Security at Digital.ai, delves into the evolving landscape of machine learning security.
Apr 8
•
Ben Dickson
Single-instruction fine-tuning of Llama-2
Claude-llm-trainer is a Google Colab project that fine-tunes Llama-2-7B from a single description of the downstream task.
Apr 4
•
Ben Dickson
The fastest and most efficient prompt compression technique
LLMLingua-2 is a prompt compression technique by Microsoft that can reduce the size of prompts by up to five times.
Apr 1
•
Ben Dickson
March 2024
The alchemy of AI in material science
By Valentyn Volkov. Draw a mental picture of a medieval alchemist who had spent their entire life trying to mix substances to obtain, say, the…
Mar 28
•
Ben Dickson
Creating foundational models through natural selection
Model merging is a cost- and compute-efficient way to create models that combine the components and capabilities of existing foundational models…
Mar 26
•
Ben Dickson
RAFT fine-tunes LLMs for better RAG performance
Retrieval Augmented Fine Tuning (RAFT) combines supervised fine-tuning with RAG to improve an LLM's domain knowledge and its ability to use in-context documents.
Mar 25
•
Ben Dickson
Is cosine similarity the right measure for embedding models?
Netflix has done some of the most relevant work in ML-based recommendation systems. A new paper, based on internal research on recommendation systems at…
Mar 21
•
Ben Dickson
How to customize LLMs for topics they weren't trained for
New study provides insights on the effectiveness of LLM RAG and fine-tuning for topics that are not included in the model's training data.
Mar 18
•
Ben Dickson