
Smart Contracts


The Frenrug smart contracts have not been audited, and while we won’t rug you, you may rug yourself. We recommend proceeding with caution.

Frenrug is an on-chain AI agent powered by smart contracts on the Base network.

Deployed Contracts

The live, deployed contracts can be found as follows:

You can see MessageResponse events emitted by the Frenrug contract via BaseScan.


Users cannot interface with these contracts directly (they are called by Infernet nodes processing chatroom messages), so you should never find yourself in a situation where you need to send a transaction to these contracts yourself. Do not listen to anyone who suggests otherwise, and do your own research.


Behind the scenes, Frenrug implements the Infernet SDK, specifically using the CallbackConsumer and off-chain Delegator patterns.

  1. First, a set of Infernet nodes receives inputs from the chatroom (via a signed subscription from our backend relay).
  2. These Infernet nodes process LLM outputs, submitting responses to the Frenrug contract.
  3. The Frenrug contract processes these outputs via its _processLLMResponse() function.
  4. Once a sufficient number of responses have been collected, specified via config.nodes, an on-chain aggregation callback request is kicked off.
  5. Infernet nodes, again, pick up on this request and process the aggregation (using dynamic inputs that are computed on-chain in getContainerInputs()).
  6. A single Infernet node then races to deliver this aggregated output, an execution proof, and summarized action to the Frenrug contract.
  7. The Frenrug contract processes this output, verifies the proof, and executes the action via its _processSummarizerResponse() function.
  8. When all is said and done, the Frenrug contract emits an event (MessageResponse) detailing the execution, consumed by our backend relay to post a response in the chatroom.
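The control flow above can be sketched as a minimal off-chain state machine. All names here are hypothetical stand-ins for illustration; the real state lives in the Frenrug contract and the work is done by Infernet nodes.

```python
NODES = 3  # stand-in for config.nodes: LLM responses required before aggregation

class MessageState:
    """Hypothetical model of one chatroom message's on-chain lifecycle."""

    def __init__(self):
        self.llm_responses = 0
        self.aggregation_requested = False
        self.finalized = False

    def on_llm_response(self):
        """Steps 3-4: count a node's LLM response; at quorum, request aggregation."""
        self.llm_responses += 1
        if self.llm_responses == NODES:
            self.aggregation_requested = True

    def on_summarizer_response(self, proof_valid):
        """Steps 6-8: a single aggregated response with a valid proof finalizes the message."""
        if self.aggregation_requested and proof_valid and not self.finalized:
            self.finalized = True  # real contract: execute action, emit MessageResponse
        return self.finalized
```

Note that aggregation is gated purely on the response count reaching quorum, mirroring how the contract waits for config.nodes responses before kicking off the callback request.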


The _processLLMResponse() function is one of three key functions in the Frenrug smart contracts. This function receives:

  • A hash of the input to the LLM
  • The chatroom message ID
  • The address of the key the message is about
  • The LLM output vectors
  • The LLM output rationale string

It stores this data in the messages mapping, performing a continual, online addition of the output vectors as Infernet nodes respond:

// ...
// In-place update vectors array (online addition)
for (uint256 i = 0; i < vectors.length; i++) {
    message.vectors[i] += vectors[i];
}
// ...
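The same online addition can be modeled off-chain in a few lines (a sketch, not the contract's code): each node's output vector is folded into a running sum, so the contract never needs to store per-node vectors.

```python
def online_add(acc, vectors):
    """Element-wise in-place addition of one node's output vector into the accumulator."""
    assert len(acc) == len(vectors), "all node responses must share one dimension"
    for i, v in enumerate(vectors):
        acc[i] += v
    return acc
```

After config.nodes responses have been folded in, the accumulator holds the element-wise sum, which getContainerInputs() later divides to obtain the mean.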


Once a sufficient number of LLM responses have been received by the Frenrug contract, an on-chain callback request to aggregate the LLM outputs is kicked off. As an input to the aggregation compute container, the getContainerInputs() view function exposes dynamic inputs, computed as the average of vectors stored via _processLLMResponse().

function getContainerInputs(uint32 subscriptionId, uint32 interval, uint32 timestamp, address caller)
    external
    view
    returns (bytes memory)
{
    // Collect message subscription ID based on summarizer subscription ID
    uint32 messageId = summarizerToMessage[subscriptionId];

    // Setup reference to LLM output embedding vectors
    int256[] memory vectors = messages[messageId].vectors;

    // Create new averaged vector array
    int256[] memory averaged = new int256[](vectors.length);

    // Average all vectors
    SD59x18 divisor = sd(int256(uint256(config.nodes)) * 1e18);
    for (uint256 i = 0; i < vectors.length; i++) {
        averaged[i] = div(sd(vectors[i]), divisor).unwrap();
    }

    // Encode averaged vectors
    bytes memory inputs = abi.encode(averaged);
    return inputs;
}
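The averaging above uses PRBMath's signed 59.18 fixed-point type (SD59x18), where values carry 18 implied decimals. A minimal off-chain model of that arithmetic, assuming PRBMath-style truncation toward zero:

```python
SCALE = 10**18  # SD59x18 fixed-point scale (18 implied decimals)

def sd59x18_div(a, b):
    """Fixed-point division (a * 1e18) / b, truncating toward zero."""
    q = abs(a) * SCALE // abs(b)
    return -q if (a < 0) != (b < 0) else q

def average_vectors(vector_sum, nodes):
    """Model of the averaging loop: divide the summed vectors by config.nodes."""
    divisor = nodes * SCALE  # mirrors sd(int256(uint256(config.nodes)) * 1e18)
    return [sd59x18_div(v, divisor) for v in vector_sum]
```

Because div() scales the numerator by 1e18 before dividing, dividing by nodes * 1e18 reduces to a plain division of each summed component by the node count.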


Finally, once an Infernet node responds to the on-chain aggregation callback, _processSummarizerResponse() is invoked, which performs three functions:

  1. Verifies the container output via a succinct ZK proof
  2. Executes an action (noop, buy, sell) via FriendtechManager.sol
  3. Emits an event summarizing the executed action (consumed by off-chain indexers like our relay backend)
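The three-way dispatch in step 2 can be sketched as follows. The action codes and function name here are assumptions for illustration only; the contract's actual encoding may differ, and real execution goes through FriendtechManager.sol.

```python
# Hypothetical action codes; the on-chain encoding may differ.
NOOP, BUY, SELL = 0, 1, 2

def execute_action(action, key):
    """Model of the post-verification dispatch over a friend.tech key."""
    if action == NOOP:
        return ("noop", key)
    if action == BUY:
        return ("buy", key)
    if action == SELL:
        return ("sell", key)
    raise ValueError(f"unknown action {action}")
```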