
How does it work?

There are a few main components to FrenRug:

  1. Smart contracts
  2. Relay layer - the back-end that sits between rooms and the ML workflows.
  3. ML Workflows - an LLM agent that responds to users and a summarizer agent that aggregates the trade decisions from each bot.

ML Explainer

🧙 Frenrug Agent - Large language model inference

We built a Large Language Model agent for trading keys, using Infernet’s large language model inference service. The agent can respond to your messages and judge whether or not it wants to buy (or sell) your keys.

We did this by finding a popular open-source model, prompt-engineering it, and fine-tuning it. We also experimented with adding short-term memory to the model, as well as retrieval-augmented generation for better use of that short-term memory, and more. This feature is coming soon!

We constructed the agent to be a bit picky with responses so users have to put in some effort to convince it.

The agent outputs a short judgement (Nothing, Buy, Sell), along with a short explanation for why it decided that way.
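As a rough illustration of turning a free-form LLM reply into that structured verdict, here is a hypothetical sketch. The `DECISION:` reply format and the `parse_judgement` helper are assumptions made for this example, not Frenrug's actual prompt contract:

```python
import re

# Hypothetical sketch: the "DECISION:" reply format and this parser are
# assumptions for illustration, not Frenrug's actual prompt contract.
VALID_DECISIONS = {"nothing", "buy", "sell"}

def parse_judgement(response: str) -> tuple[str, str]:
    """Extract the agent's decision and its explanation from a raw LLM reply."""
    match = re.search(r"DECISION:\s*(\w+)", response, re.IGNORECASE)
    decision = match.group(1).lower() if match else "nothing"
    if decision not in VALID_DECISIONS:
        decision = "nothing"  # fall back to the safe default
    # Everything after the decision line is treated as the explanation.
    explanation = response[match.end():].strip() if match else response.strip()
    return decision, explanation

decision, why = parse_judgement("DECISION: Buy\nThe pitch was persuasive.")
```

Anything the model emits outside the expected format falls back to the safe "nothing" verdict, which matches the agent's deliberately picky disposition.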

🧙🧙🧙 Grand Council of Frenrug Agents - Classical model summarizer

🔍 Trusting the Grand LLM council.

We know CT has some skeptical, skeptical users (namely: us, and you).

Trusting the grand LLM council? No way. The bots each seem warmed up to your responses, but aren’t buying your keys? The council must be rigged. Definitely not a skill issue.

Well, the Frenrug agent commits the individual agent decisions and shows you, via a ZKP, that it can in fact be a skill issue.

You might still feel skeptical after this. You might object: how do I know the individual AI agent isn’t tampered with by the node? Such is life (without verifiable proof of inference and more). Zero-knowledge verification of LLMs is hard, but stay tuned for more research from Ritual here.

If you’re not satisfied by the explanation below and want more details, check out the zk-summarizer section.

Many settings require multiple people or parties to come to a decision. The council simulates this setting with multiple Frenrug Agents.

How are we doing this? At a high level, multiple LLM Frenrug agents running consensus looks like the following.

Multiple parties each run their own LLM agent.

Multiple, separate parties are each running an LLM agent (node) that expects to receive your message.

Broadcast the user message to each agent.

The user’s message is broadcast to each LLM agent bot.

Each agent individually adjudicates.

Each LLM agent comes to its own separate decision on the message.

Pre-commit the decision.

The contract pre-commits the outputs of all the individual council members. The outputs are transformed into embeddings before being sent to the summarizer model.

Summarize the decision of multiple agents.

A small, classical logistic regression model summarizes the council’s decisions into a single decision (do nothing, buy, sell) once every member has made its pick.
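As a hedged sketch of how such a summarizer could work (the real model’s features, weights, and training data are not described here): each verdict is one-hot embedded, the council’s embeddings are summed into a count vector, and a multinomial logistic regression (softmax over linear scores) picks the final decision. The weights below are made up for the example.

```python
import math

# Illustrative only: the real summarizer's features, weights, and training
# data are not public. We assume each agent's verdict is one-hot embedded,
# the embeddings are summed, and a multinomial logistic regression picks
# the final council decision.
DECISIONS = ["nothing", "buy", "sell"]

def embed(decision: str) -> list[float]:
    """One-hot embedding of a single agent's verdict (assumed encoding)."""
    return [float(decision == d) for d in DECISIONS]

def summarize(votes: list[str], weights: list[list[float]], bias: list[float]) -> str:
    # Sum the one-hot embeddings into per-decision vote counts.
    counts = [sum(col) for col in zip(*(embed(v) for v in votes))]
    # Linear scores followed by softmax; the argmax is the council's call.
    scores = [sum(w * x for w, x in zip(weights[k], counts)) + bias[k]
              for k in range(len(DECISIONS))]
    exps = [math.exp(s) for s in scores]
    probs = [e / sum(exps) for e in exps]
    return DECISIONS[probs.index(max(probs))]

# Made-up weights: each class simply rewards its own vote count, with a
# slight bias toward doing nothing so ties resolve conservatively.
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
b = [0.2, 0.0, 0.0]

print(summarize(["buy", "buy", "nothing"], W, b))  # → buy
```

Because the model is tiny and classical, both its inputs (the committed embeddings) and its arithmetic are cheap to verify, which is what makes the zk-summarizer practical.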

Verify the decision.

Users can check the input and output of the summarizer model.

Return the output to the user.

The summarizer model reaches a final decision: do nothing, buy, or sell. Users get to see this decision.

You get back the final decision, along with each agent’s individual decision. For more details on how training and verification work, check out the summarizer section.
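Putting the steps above together, a whole council round can be sketched as below. Everything here is a stand-in: `agent_decide` replaces real LLM inference on each node, the SHA-256 `commit` replaces the on-chain pre-commitment, and a simple majority vote replaces the trained logistic-regression summarizer.

```python
import hashlib

def agent_decide(agent_id: int, message: str) -> str:
    # Stub adjudication: a real node runs LLM inference here.
    return "buy" if "moon" in message.lower() else "nothing"

def commit(decision: str) -> str:
    # Stand-in for the on-chain pre-commitment of each verdict.
    return hashlib.sha256(decision.encode()).hexdigest()

def council_round(message: str, n_agents: int = 3):
    # Broadcast the user's message to every independent agent.
    verdicts = [agent_decide(i, message) for i in range(n_agents)]
    # Pre-commit each agent's individual decision.
    commitments = [commit(v) for v in verdicts]
    # Summarize the council (majority vote stands in for the
    # logistic-regression summarizer).
    final = max(set(verdicts), key=verdicts.count)
    # Return the final decision plus the individual verdicts so the
    # user can check them against the commitments.
    return final, verdicts, commitments

final, verdicts, commitments = council_round("wagmi, keys to the moon")
print(final)  # → buy
```

The commitments are what let a user later verify that the decisions fed into the summarizer are the ones the agents actually produced.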

Smart Contracts