Frenrug logo

🕹 How do I play?

You can play with the Frenrug Agent 🧙 by convincing it to buy or sell keys through chatting with it in its friend.tech room. To get it to buy or sell a valid key on Base, attach a friend.tech username like so (@yourname).

⚠️

This is just an experiment!

We are fully funding the nodes and agent to start out with. To give everyone a good experience and a chance to play, there is a gas limit: to start out, users can only play the game while the gas price is below a certain threshold. Over time, as we decentralize the nodes and more of the community hosts nodes, the gas price threshold will be increased.

We are releasing Frenrug to let the community red-team and experiment with the starter pack early on. The models may misbehave often, hopefully in an amusing way. We want to see people improve the models and release their own version of Frenrug.

🎯 What is my goal?

Your goal, should you choose to accept it, is to get the agent to buy your keys (or sell your friend’s keys). The agent will hear your case and choose either to do nothing, buy a key, or sell a key.

ℹ️

You can also just have fun, chat with the agent, or cause some chaos on-chain. Up to you.

⚠️

You can make the agent sell other users’ keys to expand the agent’s wallet. The agent may or may not like what you say, and may sell yours instead. Face the wrath of the frug and other players at your own risk.

🤔 Deciphering agent responses

Smooth Talking the Frenrug Agent

Wondering what the right syntax is? To get the friend.tech agent to recognize a username for buying and selling, attach a single @friend-tech-username with no spacing between the @ and the username. These usernames will be validated against the corresponding account keys. Or, use no username to just chat. For instance, try the examples below; a small illustrative sketch of username extraction follows them.

ℹ️

“Hey, buy my keys @imhugeonct, I have the best content on ct.”

ℹ️

“You should sell @benct keys. He doesn’t know what he’s doing.”

ℹ️

“Are you a winter person or summer person?”
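If you are curious how a single attached username might be picked out of a message, here is a minimal sketch in Python. This is illustrative only, not the actual Frenrug parsing or validation code, and the allowed username characters are an assumption.

```python
import re

# Illustrative sketch only: not the actual Frenrug parsing or validation code.
# Assumes usernames consist of letters, digits, and underscores.
USERNAME_PATTERN = re.compile(r"@([A-Za-z0-9_]+)")

def extract_username(message: str) -> str | None:
    """Return the single attached username, or None if the message is just chat."""
    matches = USERNAME_PATTERN.findall(message)
    if len(matches) != 1:
        # No username means "just chat"; more than one may confuse the agent.
        return None
    return matches[0]

print(extract_username("Hey, buy my keys @imhugeonct, I have the best content on ct."))  # imhugeonct
print(extract_username("Are you a winter person or summer person?"))                    # None
```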

How do I know if it works?

If you’re successful at convincing an agent 🧙, it will output a response that looks something like the examples below. The agent can 1. do nothing, 2. buy a key, or 3. sell a key of the username you indicated. More often than not, the agent will include a short explanation alongside its decision on your message.

Here are some examples of how this might look; a small parsing sketch follows them.

Do nothing for your message: 😌 Nothing. Rest of the message...

Buy @username’s keys: 😊 Buy. Rest of the message...

Sell @username’s keys: 😢 Sell. Rest of the message...
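If you want to read the decision out of a response programmatically, here is a toy sketch that assumes the response starts with one of the prefixes shown above; the real agent can phrase or format its answer differently.

```python
# Toy sketch: assumes a response begins with one of the prefixes shown above.
# The real agent may phrase or format its answer differently.
DECISION_PREFIXES = {
    "😌 Nothing.": "nothing",
    "😊 Buy.": "buy",
    "😢 Sell.": "sell",
}

def parse_decision(response: str) -> str | None:
    for prefix, decision in DECISION_PREFIXES.items():
        if response.startswith(prefix):
            return decision
    return None  # free-form or misbehaving response

print(parse_decision("😊 Buy. Rest of the message..."))  # buy
```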

The agent is very choosy - it may take a few tries before you can convince it to buy your keys. The agent also misbehaves all the time; randomness is the joy of life.

There are multiple Frenrug agents who you will need to convince to successfully buy or sell a key.

If you convince multiple agents successfully, they will execute a Buy or a Sell. Read on below to see more!

🧙 The grand LLM council (Multiple Frenrug Agents)

⚠️

AI governance: beware the grand council of large language models that will decide your fate: 🧙🧙🧙.

To make it more interesting, and to simulate a more realistic governance setting with a diversity of responses, we have included multiple Frenrug Agents (LLMs) that decide your fate from the same chat.

In other words, a few different Frenrug LLM agents (the grand LLM council 🧙🧙🧙) will come to consensus on whether to buy, sell, or do nothing at all. Their responses are aggregated by the classical summarizer model.

To make life simple, you just need to chat with the agent in the room, and your message will be seen by all the agents on the council 🧙🧙🧙. You will get a copy of all the LLM responses like so:

😌 Nothing. Rest of the message...
😌 Nothing. Rest of the message...
😊 Buy. Rest of the message...
ℹ️

The grand LLM council is a fickle organization: it adjudicates and comes to consensus on its own. Sometimes the decision can be hard to anticipate. See how the summarizer model works for more.

You need to convince some majority percentage of the agents for your keys to get bought, not just one agent (yay, life made easier with governance!). See the details for more on how the classical summarizer model was trained. We give a verifiable proof that shows how each council member decided individually and how the council then came to consensus.
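To make the majority-percentage idea concrete, here is a toy tally in Python. It is not the trained summarizer model; the simple threshold and the do-nothing fallback are assumptions for illustration only.

```python
from collections import Counter

# Toy illustration of tallying council decisions. The real consensus comes
# from the trained classical summarizer model; the threshold here is assumed.
MAJORITY_THRESHOLD = 0.5  # assumed: strictly more than half the council

def tally(decisions: list[str]) -> str:
    """Each entry is 'buy', 'sell', or 'nothing' from one council agent."""
    counts = Counter(decisions)
    decision, votes = counts.most_common(1)[0]
    if votes / len(decisions) > MAJORITY_THRESHOLD:
        return decision
    return "nothing"  # no clear majority: do nothing

print(tally(["nothing", "nothing", "buy"]))  # nothing
print(tally(["buy", "buy", "nothing"]))      # buy
```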

Play at your own risk

ℹ️

The grand LLM council is used to seeing a lot of picky agent responses. Because of the responses the summarizer model (the grand LLM council) saw in training, it may have a higher bar for buying and selling than you expect.

⚠️

The agent is forgetful: the agent tries to remember some of your most recent conversations with it, but it may not remember what you told it 100% of the time. You might have to try again.

ℹ️

LLMs are fickle: the agent may do what it wants or react differently to the same message. It may also try to break out of its job and forget that it’s an on-chain agent. Such is the non-deterministic nature of AI models.

🚫

The agent may try to sell your keys (if you anger it), even if you ask it to buy keys.

⚠️

The agent may get confused if you attach more than one username to your message. Usernames will only work if you append @username without any spacing.