This is part 1 of a 3-part AI go-to-market strategy that I’ve been working on. Parts 2 and 3 are more about creating a moat / unique position, but they involve some economics questions I haven’t worked out yet. Anyhow, here is part 1.
Introduction
Today, if you ask ChatGPT or Google’s Gemini about the current weather conditions in a given city, you’ll get an accurate answer. But if you ask for the current balance of a public cryptocurrency address, you’ll be told to go elsewhere to find it. ChatGPT, Gemini, and other LLMs don’t work with blockchain data by default because there isn’t a default RAG/API provider that the AI builder community sees as a go-to solution for accessing blockchain data. Pocket could become that go-to RAG/API provider, and doing so would require little to no new development effort.
Retrieval-augmented generation (RAG)
Retrieval-augmented generation (RAG) enables LLM-based AI systems to augment prompts with accurate, up-to-date data that the LLM can use to respond to requests. This is typically done through calls to a third-party application programming interface (API). For instance, when you ask ChatGPT or Gemini for the current weather conditions in a particular city, the model generates code that is executed to retrieve the current data from a weather API. The API response is then used to prompt the LLM for the final response. With Pocket, the same process could work for blockchain data.
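To make the flow concrete, here is a minimal sketch of the retrieve-then-prompt loop described above. The retrieval step is stubbed out with hypothetical weather data (a real system would have the LLM generate and execute the API call), and the function names are purely illustrative:

```python
# Minimal sketch of the RAG flow: retrieve external data, then inject it
# into the prompt so the LLM answers from facts rather than training data.

def retrieve_weather(city: str) -> dict:
    # Stand-in for a real weather-API call; the values are hypothetical.
    return {"city": city, "temp_c": 21, "conditions": "partly cloudy"}

def build_augmented_prompt(question: str, retrieved: dict) -> str:
    # The retrieved data becomes "context" the model is told to rely on.
    context = ", ".join(f"{k}={v}" for k, v in retrieved.items())
    return (
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer using only the context above."
    )

prompt = build_augmented_prompt(
    "What's the weather in Lisbon?", retrieve_weather("Lisbon")
)
# `prompt` would then be sent to the LLM for the final response.
```

The exact same shape works for blockchain data: swap the weather retriever for a JSON-RPC call to a Pocket endpoint.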
In larger, more advanced AI systems like ChatGPT and Gemini, RAG happens automatically for well-known APIs. Additionally, OpenAI, LangChain, and others have begun providing specification formats that effectively show LLMs how to call lesser-known APIs. With these specifications, the LLMs can accurately infer the code that needs to be written to call the APIs for the RAG process. While there is no single standard that all platforms adhere to yet, such standards are in the works.
Developers who have worked with Swagger/OpenAPI specifications will be familiar with the specifications currently defined by OpenAI and supported by LangChain. This basically involves using a Swagger/OpenAPI specification to describe a function-calling process in a way that ensures the model can generate consistently reliable code. Here is a simple example that I used to enable an OpenAI GPT to get the current balance of an ETH account.
```yaml
openapi: 3.0.0
info:
  title: Ethereum RPC API
  description: This is an Ethereum blockchain JSON-RPC API spec designed for the `eth_getBalance` method.
  version: 1.0.0
servers:
  - url: https://eth-mainnet.rpc.grove.city/v1/61dda421c741ae003bf4afaf
    description: Ethereum Mainnet RPC Server
paths:
  /:
    post:
      operationId: ethGetBalance
      summary: Returns the balance of the account of given address.
      tags:
        - Ethereum RPC
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                jsonrpc:
                  type: string
                  example: '2.0'
                method:
                  type: string
                  example: 'eth_getBalance'
                params:
                  type: array
                  items:
                    oneOf:
                      - type: string
                        description: Account address in hexadecimal format.
                      - type: string
                        description: Block parameter (e.g., 'latest', 'earliest', block number in hex).
                  minItems: 1
                  example: ["0x407d73d8a49eeb85d32cf465507dd71d507100c1", "latest"]
                id:
                  type: integer
                  example: 1
              required:
                - jsonrpc
                - method
                - params
                - id
      responses:
        '200':
          description: A JSON object containing the balance of the account
          content:
            application/json:
              schema:
                type: object
                properties:
                  jsonrpc:
                    type: string
                  id:
                    type: integer
                  result:
                    type: string
                required:
                  - jsonrpc
                  - id
                  - result
```
In the above example, note that the API is a Grove.City RPC endpoint for the ETH mainnet. It’s using my free tier on Grove.City, so please don’t use up my limited free API credits. But seriously, this brings up a good point: some free tier will be needed. Maybe this is something that PNF could support. It’s also possible to instruct the LLM to respond with a message telling the end user they need a paid account. That’s beyond this document’s scope but not super involved, and the smart people at Grove, Nodies, Poktscan, COD3R, etc. won’t have a problem figuring it out.
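For reference, here is roughly the code an LLM would generate from that spec. This is a hedged sketch, not a definitive implementation: the endpoint URL mirrors the one in the spec (swap in your own Grove.City app ID before running), and the hex-to-ETH conversion is split into its own helper since `eth_getBalance` returns the balance in wei as a hex string:

```python
# Sketch of the JSON-RPC call described by the OpenAPI spec above,
# using only the Python standard library.

import json
import urllib.request

# Grove.City endpoint from the spec; replace with your own app ID.
RPC_URL = "https://eth-mainnet.rpc.grove.city/v1/61dda421c741ae003bf4afaf"

def wei_hex_to_eth(hex_wei: str) -> float:
    # eth_getBalance returns wei as a hex string; 1 ETH == 10**18 wei.
    return int(hex_wei, 16) / 10**18

def get_balance(address: str, block: str = "latest") -> float:
    # Build the request body exactly as the spec's requestBody describes.
    payload = json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_getBalance",
        "params": [address, block],
        "id": 1,
    }).encode()
    req = urllib.request.Request(
        RPC_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)["result"]
    return wei_hex_to_eth(result)

if __name__ == "__main__":
    print(get_balance("0x407d73d8a49eeb85d32cf465507dd71d507100c1"))
```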
High-Level Strategy
At this point, the go-to-market strategy I recommend simply involves creating tutorials and code examples for the AI community. This could be done at the PNF level, by gateway providers, or by community members. There are hundreds of active AI projects, but I’d start with examples/tutorials for OpenAI assistants, OpenAI GPTs, OpenAI function calling, AutoGPT, AutoGen, and LangChain. I would also recommend outreach to the product leaders on larger projects to understand how Pocket could become a default data provider for products like ChatGPT and Gemini.
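As an example of what one of those tutorials might contain, here is a sketch of an OpenAI function-calling tool definition for the same `eth_getBalance` capability. The function name and descriptions are illustrative (my own choices, not from any existing tutorial); the dictionary shape follows OpenAI's "tools" format for chat completions:

```python
# Illustrative OpenAI function-calling tool definition for eth_getBalance.
# An assistant configured with this tool can decide when to call it and
# supply the arguments; your application then executes the actual RPC call.

ETH_GET_BALANCE_TOOL = {
    "type": "function",
    "function": {
        "name": "eth_get_balance",
        "description": (
            "Get the current balance of an Ethereum address, "
            "returned in wei as a hex string."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "address": {
                    "type": "string",
                    "description": "Account address in hexadecimal format.",
                },
                "block": {
                    "type": "string",
                    "description": (
                        "Block tag: 'latest', 'earliest', "
                        "or a block number in hex."
                    ),
                },
            },
            "required": ["address"],
        },
    },
}
```

Equivalent examples for AutoGPT, AutoGen, and LangChain would each be a page or two of similar glue code, which is exactly why tutorials are cheap to produce and high-leverage.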
Sense of Urgency
I’d love to say this is some brilliant idea nobody else is considering. It’s not. The only edge Pocket might have is that nobody else is visibly doing anything in the AI communities that I’m aware of. But it would be naive to think that Infura, Ankr, Chainstack, QuickNode, Alchemy, and others aren’t working on similar strategies or won’t be in the very near future. It’s also naive to think that AI systems won’t soon work with blockchain data as effortlessly as they work with weather APIs today.
Next Steps
If everyone agrees that this strategy makes sense, I’m happy to elaborate and collaborate to make it happen.