AI Go-to-Market Thoughts & High-Level Strategy - Part 1

This is part 1 of a 3-part AI go-to-market strategy that I’ve been working on. Parts 2 and 3 focus more on creating a moat / unique position, but they involve some economics questions I haven’t worked out yet. Anyhow, here is part 1.

Introduction

Today, if you ask ChatGPT or Google’s Gemini about the current weather conditions in a given city, you’ll get an accurate answer. But if you ask for the current balance of a public cryptocurrency address, you’ll be told to go elsewhere to find it. ChatGPT, Gemini, and other LLMs don’t work with blockchain data by default because there isn’t a default RAG/API provider that the AI builder community sees as a go-to solution for accessing blockchain data. Pocket could be that go-to RAG/API provider, and it wouldn’t require any new development effort.

Retrieval-augmented generation (RAG)

Retrieval-augmented generation (RAG) enables LLM-based AI systems to augment prompts with accurate, up-to-date data the LLM can use to respond to requests. This is typically done through calls to a third-party application programming interface (API). For instance, when you ask ChatGPT or Gemini for the current weather conditions in a particular city, the model generates code that is executed to call a weather API and retrieve the current data. The API response is then used to prompt the LLM for the final response. With Pocket, the same process could work for blockchain data.
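As a rough sketch of that flow (the function names are illustrative, and the retrieval step is stubbed with a fixed value so the example runs offline rather than hitting a real endpoint):

```python
def retrieve_eth_balance(address: str) -> str:
    # In a real pipeline this step would POST a JSON-RPC request like
    # {"jsonrpc": "2.0", "method": "eth_getBalance",
    #  "params": [address, "latest"], "id": 1}
    # to an RPC endpoint. Stubbed here so the sketch runs offline.
    return "0xde0b6b3a7640000"  # 1 ETH (10**18 wei), hex-encoded

def augment_prompt(question: str, address: str) -> str:
    # The retrieved API response is injected into the prompt so the
    # LLM answers from real data instead of guessing.
    balance_wei = int(retrieve_eth_balance(address), 16)
    return (
        f"Context: the balance of {address} is {balance_wei} wei.\n"
        f"Question: {question}"
    )

prompt = augment_prompt(
    "What is the current balance of this address?",
    "0x407d73d8a49eeb85d32cf465507dd71d507100c1",
)
print(prompt)
```

The final prompt, now carrying the retrieved balance as context, is what actually gets sent to the LLM.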

In more extensive and advanced AI systems like ChatGPT and Gemini, RAG happens automatically for well-known APIs. Additionally, OpenAI, LangChain, and others have begun providing specifications that effectively show LLMs how to call lesser-known APIs. With these specifications, the LLMs can accurately infer the code that needs to be written to call the APIs for the RAG process. While there aren’t yet standards that all platforms adhere to, those standards are in the works.
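OpenAI’s function-calling interface is one example: it takes a JSON Schema description of each available call. A minimal, illustrative definition for eth_getBalance might look like this (the field names follow the common function-calling convention, but the exact wrapper a given platform expects may differ):

```python
import json

# A minimal OpenAI-style function definition for eth_getBalance.
# Descriptions guide the model toward generating correct arguments.
eth_get_balance_tool = {
    "name": "eth_getBalance",
    "description": "Get the balance of an Ethereum address in wei.",
    "parameters": {
        "type": "object",
        "properties": {
            "address": {
                "type": "string",
                "description": "Account address in hexadecimal format.",
            },
            "block": {
                "type": "string",
                "description": "Block tag: 'latest', 'earliest', or a hex block number.",
            },
        },
        "required": ["address"],
    },
}

print(json.dumps(eth_get_balance_tool, indent=2))
```

The same information can also be expressed as a Swagger/OpenAPI spec, which is the route shown below.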

Developers who have worked with Swagger/OpenAPI specifications will be familiar with the approach currently defined by OpenAI and supported by LangChain. It basically involves using a Swagger/OpenAPI specification to describe a function-calling process in a way that ensures the model can generate consistently reliable code. Here is a simple example that I used to enable an OpenAI GPT to get the current balance for an ETH account.

openapi: 3.0.0
info:
  title: Ethereum RPC API
  description: This is an Ethereum blockchain JSON-RPC API spec designed for the `eth_getBalance` method.
  version: 1.0.0
servers:
  - url: https://eth-mainnet.rpc.grove.city/v1/61dda421c741ae003bf4afaf
    description: Ethereum Mainnet RPC Server
paths:
  /:
    post:
      operationId: ethGetBalance
      summary: Returns the balance of the account of given address.
      tags:
        - Ethereum RPC
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                jsonrpc:
                  type: string
                  example: '2.0'
                method:
                  type: string
                  example: 'eth_getBalance'
                params:
                  type: array
                  items:
                    oneOf:
                      - type: string
                        description: Account address in hexadecimal format.
                      - type: string
                        description: Block parameter (e.g., 'latest', 'earliest', block number in hex).
                  minItems: 1
                  example: ["0x407d73d8a49eeb85d32cf465507dd71d507100c1", "latest"]
                id:
                  type: integer
                  example: 1
              required:
                - jsonrpc
                - method
                - params
                - id
      responses:
        '200':
          description: A JSON object containing the balance of the account
          content:
            application/json:
              schema:
                type: object
                properties:
                  jsonrpc:
                    type: string
                  id:
                    type: integer
                  result:
                    type: string
                required:
                  - jsonrpc
                  - id
                  - result

In the above example, note that the API is a Grove.City RPC endpoint for the ETH mainnet. It’s using my free tier on Grove.City, so please don’t use up my limited free API credits :blush:. But seriously, this brings up a good point: some free tier will be needed. Maybe this is something that PNF could support. It’s also possible to instruct the LLM to respond with a message telling the end user they need to get a paid account. That’s beyond this document’s scope but not super involved, and the smart people at Grove, Nodies, Poktscan, COD3R, etc. won’t have a problem figuring it out.
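One wiring detail worth calling out for anyone building on the spec above: eth_getBalance returns the balance in wei as a hex string, so a conversion step is usually needed before the result is shown to an end user. A minimal sketch:

```python
def wei_hex_to_eth(result: str) -> float:
    # eth_getBalance returns the balance in wei as a hex string,
    # e.g. "0xde0b6b3a7640000". 1 ETH = 10**18 wei.
    return int(result, 16) / 10**18

print(wei_hex_to_eth("0xde0b6b3a7640000"))  # 1.0
```

In practice the LLM can often be instructed to do this conversion itself, but doing it in code is deterministic and avoids arithmetic mistakes in the model’s answer.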

High-Level Strategy

At this point, the go-to-market strategy I recommend just involves creating tutorials and code examples for the AI community. This could be done at the PNF level, by gateway providers, or by community members. There are literally hundreds of active AI projects, but I’d start with examples/tutorials for OpenAI Assistants, OpenAI GPTs, OpenAI function calling, AutoGPT, AutoGen, and LangChain. I would also recommend outreach efforts to the product leaders on larger projects to understand how Pocket could become a default for products like ChatGPT and Gemini.

Sense of Urgency

I’d love to say this is some brilliant idea nobody else is considering. It’s not. The only edge Pocket might have is that nobody else is visibly doing anything in the AI communities that I’m aware of. But it would be naive to think that Infura, Ankr, Chainstack, Quicknode, Alchemy, and others aren’t working on similar strategies or won’t be in the very near future. It’s also naive to think that AI systems won’t soon be working with blockchain data as effortlessly as they are working with weather APIs.

Next Steps

If everyone agrees that this strategy makes sense, I’m happy to elaborate and collaborate to make it happen.

10 Likes

Strongly in support of this, and played around with making some Grove / POKT based OpenAI specs this morning. Seems like it would be easy to turn out a ton of content around this very quickly.

4 Likes

As @steve points out, this is an easy thing to do, and we have the know-how to do it.
We are not strangers to RAGs. @Olshansky and we created RAGs for the Pocket ecosystem, both based on LangChain (using slightly different approaches), a long time ago (on a community-excitement scale).

Exposure to AI communities will be fundamental, and this is the correct way to start IMO.
However there are some details that should be clearly stated:

  • Right now the only access to Pocket Network is through gateways (namely Grove and Nodies), so any interaction of the RAG with the blockchain will be handled by them. So, creating an OpenAPI spec (a formal and correct one) should be the work of the gateways, as their APIs may behave differently.
  • Pocket Network will be able to create its own API once the Shannon update is implemented (enabling permissionless Apps); only at that point can the Pocket API packages be created.

What we are proposing to do in the meantime is:

  • Create a small LangServe example of a RAG that accesses blockchain data.
  • It will use Pocket Network (Nodies free endpoints).
  • No OpenAPI support; only a chain and tools (langchain-chain) that can be easily added to other LangChain projects (enter the AI community).

Then (as features are available) we can expand it to:

  • Use Pocket-hosted LLMs (probably through gateways first)
  • Replace gateway endpoints with App keys and have the exact same functionality.

Once this is ready, we can start coding a formal Pocket Network API for LangChain or any other framework.

2 Likes

Thanks for your feedback here, @RawthiL - I know you and @Olshansky understand this very well. So having you guys weigh in as community leaders with deep experience on both sides (pocket + AI) is essential for an initiative like this to be successful.

I have a question for you on this. Someone on my team at Dabble Lab suggested that a proxy could be set up to round-robin relays to the different portal providers. If the proxy were created using a pokt.network DNS name, that would allow us to create endpoints that don’t favor one gateway. Also, when new gateways come online, they could be added. This would spread responsibility and opportunity while keeping control at the PNF level. What are your thoughts on this? Is that a viable option?
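To make the round-robin idea concrete, here’s a minimal sketch of the selection logic such a proxy could use (the gateway URLs are placeholders, not real endpoints):

```python
from itertools import cycle

# Placeholder downstream gateways; a real deployment would list the
# actual Grove/Nodies endpoints plus any new gateways as they launch.
GATEWAYS = [
    "https://gateway-a.example/v1",
    "https://gateway-b.example/v1",
]

_rotation = cycle(GATEWAYS)

def next_gateway() -> str:
    # Each incoming relay is forwarded to the next gateway in rotation,
    # so no single provider is favored.
    return next(_rotation)

print([next_gateway() for _ in range(4)])
```

A production proxy would of course add health checks, retries, and per-gateway rate limits on top of the basic rotation, but the fairness property is just this.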

Can you expand on this? Are you saying you’d just create a LangChain tool but not the OpenAPI/Swagger specifications? If so, I can have someone from my team do that, or I can show community members how to do it.

If setting up a proxy using Cloudflare would work, maybe we could skip this step?

LOVE THIS IDEA!

2 Likes

Sure, I like the idea; not such a fan of coding it though hahaha

Exactly, we will be using custom tools. We can use an API tool if you can create a Swagger specification, but we don’t want to block the RAG creation due to the lack of a proper API. I think we can work in parallel: if you get the API working, we can just implement it; meanwhile, we’ll hardcode the tool.

I don’t think that will work; going from gateways to App-key access will require a lot of changes. Not only will the request itself be very different (including Pocket-specific data), it will also need to use the Pocket relayer code (and many details that are not clear yet).

2 Likes

Thanks again @RawthiL

I’m pretty sure I can get someone from my team to code it. We’ve done something similar and I don’t recall it being a big lift. I’ll confirm, however.

That makes sense to me.

Agreed, we don’t want to create a maintenance requirement. I’m just wondering if there could be a way to provide one set of endpoint URLs that would never need to change publicly but might require an API key on the downstream requests. For example, suppose we publish https://eth-mainnet.rpc.pokt.network/v1/ as the public ETH endpoint. The proxy might have to include paid gateway endpoints downstream to keep up with traffic if it grows. Maybe PNF pays the gateways or something. Not sure how that works, but it’s a plausible scenario.

3 Likes

Hey @RawthiL, one of my teammates at Dabble Lab, @Bashiru, got a simple proxy set up along with some pseudocode for a LangChain example.

NOTE: The proxy is set up on Cloudflare using a paid account that can scale if needed. Dabble Lab will cover those costs, but the intent is for this to be used for demos and dev testing, not production. Also, we can set up endpoints for all the other chains easily, but we’d want to coordinate with Grove and Nodies before doing that.

The proxy ETH endpoint is https://eth-mainnet.rpc.pokt.dev/v1/. It can be set up to send round-robin relays to Grove, Nodies, or any other provider to remove any biases.

It would be cool if Grove (ping @gabalab) and Nodies (ping @poktblade) could provide endpoints with monitoring, so the usage of the LangChain demo code can be tracked.

Here is the code example that @Bashiru provided me. The load_ethereum_openapi_docs method could call a URL with the OpenAPI specs for all the ETH methods. I believe @gabalab already set something up for this in a GitHub repo, so if the repo is public, we could just use the .yaml endpoint for the doc.

from langchain.chains import APIChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# placeholder: load the OpenAPI docs for the Ethereum RPC
ethereum_rpc_docs = load_ethereum_openapi_docs()

chain = APIChain.from_llm_and_api_docs(
    llm,
    ethereum_rpc_docs,
    headers={},
    verbose=True,
    limit_to_domains=["https://eth-mainnet.rpc.pokt.dev/v1/"],
)
result = chain.run("What is the block height of the Ethereum mainnet?")

Again, just an example. You might have something more robust in mind. But I thought we’d get the ball rolling with the proxy at least.

4 Likes

Thanks, Steve. We created this repo and it is public. Anyone is welcome to contribute, and you can add the LangChain files as well. In that repo, we reference an endpoint that can be used in the round robin. It’s on our free tier, but that should be enough to start with (100k requests/day, don’t abuse pls :slight_smile: ). I will monitor the usage on our side.

Do you have an idea of where you would like to circulate these specs? I was thinking of communities such as Hugging Face or Ollama. We should get a list going.

4 Likes

Wow, great. We are actually finishing up the basic agent; I’ll test it with the endpoint shortly.

Regarding the OpenAPI spec, the sooner we have that running the better; for us it’s just a matter of changing the tool definition.

In the mid/long term we will have to develop two things:

  • OpenAPI documentation to use APIChain: This will require gateways to agree on the request format, but if we have at least one on board, that’s enough to have it effectively branded as Pocket-based.
  • A full pocket-relayer package and tool: We will need this to be able to use App keys to access data. This will be gateway-agnostic and provide 100% decentralized access to Pocket.

Great! This should not be necessary. Maybe some better method descriptions; I’ll see how it goes as it is now.

4 Likes

@Bashiru thinks he might be able to get a model to generate the OpenAPI specs for the different chains. He’ll push them to the repo that @gabalab provided when he gets them working. Assuming he’s able to, of course.

Hey @RawthiL can you expand on this? I’m not fully clear on what the App keys are for. Is this for cases when a user wants to use a secure endpoint?

Also, would that be sent as a bearer token?

2 Likes

We have created the WORK IN PROGRESS repository with a minimum viable LangChain agent. You can find the repository here:

The agent works on the endpoint /pocket_agent/invoke, which uses a tool based on the web3 Python package and the public endpoint (Nodies or the round-robin one) to access ETH balances.
There is a second agent implemented that uses an OpenAPI chain that reads Grove’s specification of eth_getBalance. This one is not working; I fear that is due to a lack of clarity in the spec. I have created an issue to track this in the pokt square repo. It is better to keep the conversation on this subject there.

Using App keys is the way to have real permissionless and decentralized access to Pocket. Gateways are centralized and route your requests; if you want to avoid that, you will need to manage your App keys and the Pocket sessions.
I have been thinking about this; maybe it is not a module for LangChain but a side service that we will need to add to the deployment later. Now I think it is irrelevant to this conversation, as we can make the side service manage all the Pocket complexity and accept calls like any other endpoint.

6 Likes

This may be completely uninformed, but Ollama can rename models. Would that work in this case? Just rename llama3 to gpt-3.5-turbo and send the request through the Ollama code, which supports embeddings anyway. Again, maybe I am completely uninformed; @RawthiL knows best.

1 Like