Code written by LLMs, website copy written by LLMs trashing competitors, Hacker News post written by LLMs, as if it were for LinkedIn. AI images everywhere. I dunno man, this is really sloppy.
We are firing on a lot of cylinders building out a platform across a browser extension, cloud dashboard, API, and now embeddable web agents. We are a team of two leveraging AI to 100x our output and build out all of these surfaces.
The underlying agentic performance, though, is undeniable.
Also, what's the point of objecting that something is AI generated? We do thorough review to ensure readability, coherence, and accuracy.
I don't agree that it's possible to output this much code with an LLM and also be sure that you know the codebase. Chances are very close to 100% that the entire stack is vibe coded and that you rely on LLMs to fix bugs too.
Didn't Anthropic say Claude Cowork was completely written by Claude Code and hence "vibe-coded"?
All that matters is that the agent we deliver works seamlessly, without bugs or issues, and our web agent excels there, even beating OpenAI Operator and Anthropic CUA in benchmarks!
The DOM-native architecture is a clever way to bypass API integration, but it introduces significant operational risk regarding state management. Since this agent executes checkouts and form fills, a hallucination here isn't just wrong text—it’s potentially an erroneous charge or data loss. How do you handle liability or remediation if the agent misinterprets a UI element and executes an unwanted transaction? Does the script enforce a "human-in-the-loop" confirmation step for high-stakes actions like payment submission, or is the goal full autonomy regardless of confidence levels?
We are still thinking through the optimal use cases. Right now you can configure blocklists of URL paths that the agent won't act on.
On the reliability front, we offer integrations like Recordings, which ground the agent on demonstrated trajectories even as the underlying website changes, and a Knowledge Base covering your whole domain.
You, the website owner, can provide additional guidance to the agent.
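For illustration only, that kind of per-site configuration might look roughly like the sketch below; the key names here are hypothetical assumptions and are not taken from Rover's actual configuration docs.

```typescript
// Hypothetical configuration sketch; key names are illustrative assumptions,
// not Rover's documented options.
const agentConfig = {
  // URL paths the agent is never allowed to act on.
  blockedPaths: ["/checkout/payment", "/account/delete"],

  // Site-owner guidance injected into the agent's instructions.
  guidance:
    "Never submit payment forms. Ask the user to confirm before placing an order.",

  // Optional grounding sources mentioned above.
  recordings: ["return-an-item", "track-my-order"],
  knowledgeBaseUrl: "https://example.com/sitemap.xml",
};
```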
Originally called Retriever, based on the domain. Trademark issues?
So it's a bunch of tools that Gemini can call, but the tools involve low-level interactions with the page structure in the end-user's browser.
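Concretely, a setup like that usually amounts to a tool schema handed to the model plus a browser-side dispatcher that executes whatever call the model returns against the live DOM. The sketch below is illustrative; the tool names, schemas, and dispatch logic are assumptions, not Rover's actual implementation.

```typescript
// Illustrative sketch only: tool names, schemas, and dispatch logic here are
// assumptions, not Rover's actual implementation.

// A tool the LLM can request, described as a plain JSON-schema-style object.
const clickElementTool = {
  name: "click_element",
  description: "Click an interactive element identified by a CSS selector.",
  parameters: {
    type: "object",
    properties: {
      selector: { type: "string", description: "CSS selector of the target element" },
    },
    required: ["selector"],
  },
};

// Shape of a tool call the model sends back.
interface ToolCall {
  name: string;
  args: { selector?: string; text?: string };
}

// Browser-side dispatcher: the model never touches the page itself; it only
// emits tool calls that this code executes inside the end-user's tab.
function executeToolCall(call: ToolCall): string {
  const el = call.args.selector
    ? document.querySelector<HTMLElement>(call.args.selector)
    : null;
  if (!el) return `Element not found: ${call.args.selector}`;

  switch (call.name) {
    case "click_element":
      el.click();
      return `Clicked ${call.args.selector}`;
    case "fill_input":
      (el as HTMLInputElement).value = call.args.text ?? "";
      el.dispatchEvent(new Event("input", { bubbles: true }));
      return `Filled ${call.args.selector}`;
    default:
      return `Unknown tool: ${call.name}`;
  }
}
```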
What is the moat? What is an "agent" when you take away the powerful LLM?
Rover lives inside your website
Rover does not just live "inside" my website, because you are using Gemini 3 Flash to do all the heavy lifting.
Who is the audience here? It sounds like you're addressing people who don't know how the technology works, but the cutesy concept is borderline misleading.
Also, can you back up this claim with a human-written response? (emphasis mine)
When rtrvr.ai interacts with a webpage, there is zero automation fingerprint:
- No navigator.webdriver flag
- No CDP-specific JavaScript objects
- *No detectable automation patterns in network requests*
- *Identical timing characteristics to human interaction*
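For context, these claims are about the kinds of checks anti-bot scripts actually run. A simplified sketch of such checks follows; it is not any particular vendor's detection logic, and real systems use many more signals.

```typescript
// Simplified sketch of common automation checks; real anti-bot vendors use
// many more signals (TLS/network fingerprints, behavioral timing models, etc.).
function looksAutomated(): boolean {
  // Set to true by WebDriver-based tooling (Selenium, Playwright, Puppeteer by default).
  if (navigator.webdriver) return true;

  // Globals historically injected by ChromeDriver/PhantomJS-style automation.
  const suspectGlobals = ["cdc_adoQpoasnfa76pfcZLmcfl_Array", "__nightmare", "_phantom"];
  if (suspectGlobals.some((key) => key in window)) return true;

  // Older headless Chrome builds shipped with no plugins and an empty language list.
  if (navigator.plugins.length === 0 && navigator.languages.length === 0) return true;

  return false;
}
```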
Thanks for taking a look! Our core technical moat is an agentic harness that can represent and take actions on any webpage without any screenshots. With this approach we even beat custom-trained models like OpenAI Operator and Anthropic CUA:
https://www.rtrvr.ai/blog/web-bench-results
Everyone else in the space just takes a screenshot and asks a model what coordinates to click; our core thesis is that LLMs understand semantic representations fundamentally better than vision. But with this DOM approach there is a long tail of HTML/DOM edge cases to cover, which we have built out for with our 20k+ users surfacing those edge cases.
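As a rough illustration of the general DOM-based approach (Rover's actual serializer is not public, so the element selection and labeling here are assumptions), a harness like this walks the page and hands the model a compact text description of the interactive elements instead of a screenshot:

```typescript
// Illustrative DOM-to-text serializer sketch; not Rover's actual representation.
interface PageElement {
  index: number;       // stable handle the model can refer back to
  tag: string;
  role: string | null;
  label: string;       // accessible name / visible text
  selector: string;    // how the harness re-locates the element to act on it
}

function serializeInteractiveElements(): PageElement[] {
  const nodes = Array.from(
    document.querySelectorAll<HTMLElement>(
      "a[href], button, input, select, textarea, [role='button']"
    )
  ).filter((el) => el.offsetParent !== null); // skip invisible elements

  return nodes.map((el, index) => {
    // Stamp a handle so the harness can re-locate the element when acting.
    el.setAttribute("data-agent-index", String(index));
    return {
      index,
      tag: el.tagName.toLowerCase(),
      role: el.getAttribute("role"),
      label:
        el.getAttribute("aria-label") ||
        (el as HTMLInputElement).placeholder ||
        el.innerText.trim().slice(0, 80),
      selector: `[data-agent-index="${index}"]`,
    };
  });
}

// The resulting list is sent to the LLM as plain text, e.g. "[3] button 'Add to cart'",
// and the model replies with an action such as { tool: "click_element", index: 3 }.
```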
Soon you will be able to record demonstration tasks via our partner Chrome Extension, as well as set up knowledge bases scraped by our Cloud browsers to provide additional context to the agent. So there is a platform moat as well.
The audience is website owners who want to increase visitor engagement and conversion via a conversational interface for users.
This is more for our cloud browser platform, where we launch cloud browsers for vibe scraping, controlled via a custom extension instead of CDP. You can try it out at rtrvr.ai/cloud, where we can get data back from sites with strong anti-bot detection like google.com.
"Sync your logged-in browser sessions from the Extension to Cloud browsers. Access authenticated sites at scale."
Genuine question: have you seen 'cookie syncing' before (Google, perhaps)? If so, what was it used for? I could understand if they were cookies for your service only. It sounds like a security nightmare waiting to happen.
So with the rise of cloud browser agents, identity management is a critical problem.
A lot of users on our rtrvr.ai/cloud platform want to automate sites they have to log in to, and right now they resort to including usernames/passwords in prompts (presumably for non-critical sites they don't really care about). We are offering a more secure option: syncing cookies from our Chrome Extension to our cloud browsers, so the agents stay logged in without any credential exposure.
You choose the specific domains whose cookies you want synced, so it's not like all your cookies are syncing.
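Mechanically, a domain-scoped sync like that can be built on the standard chrome.cookies extension API; the sketch below is a simplified illustration, not rtrvr's actual implementation, and the upload endpoint shown is a placeholder.

```typescript
// Simplified sketch of domain-scoped cookie sync from a Chrome extension (MV3);
// not rtrvr's actual implementation. Requires the "cookies" permission and host
// permissions for the selected domains in the extension manifest.
async function syncCookiesForDomain(domain: string, uploadUrl: string): Promise<void> {
  // chrome.cookies.getAll filters to the user-selected domain only,
  // so unrelated cookies never leave the browser.
  const cookies = await chrome.cookies.getAll({ domain });

  await fetch(uploadUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // In practice this payload should be encrypted before it leaves the extension.
    body: JSON.stringify({ domain, cookies }),
  });
}

// Example: sync only the cookies needed for one authenticated workflow.
// syncCookiesForDomain("example.com", "https://cloud.example.invalid/cookies");
```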
We have seen people hire consultants to build RAG pipelines and then maintain them, only to end up doing QA on the agent while the world shifts and they lose traffic to the likes of ChatGPT; meanwhile Google comes up with WebMCP, a protocol that makes you do more work, essentially maintaining a lot of APIs for your UI and databases. Rover is the answer to these pain points: no RAG pipelines, just one script tag, and your users engage, retain, and get work done on your website without ever leaving it, fully conversationally.
Essentially, are you saying 'don't give data to Google, but give it to us'?
I can't see how the merchant websites own anything, given that the script points to your website and everything else is a black box.
Just wondering...
Hey, we have various configurations available even in this very early preview: https://rover.rtrvr.ai/docs/configuration. You can turn off telemetry so we don't collect the data. As a website you can also host the script on your preferred endpoint, and our 'apiBase' setting helps with that by pointing to custom domains. And based on a project's data needs, we are also open to tighter data policy rules.
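To make that concrete: 'apiBase' and the telemetry toggle are mentioned above, but the exact key names and the init call in this sketch are assumptions rather than Rover's documented interface, so treat it purely as an illustration of the idea.

```typescript
// Hypothetical embed/config sketch. 'apiBase' and a telemetry toggle are
// referenced in the reply above, but the key names and init API shown here
// are assumptions, not Rover's documented interface.
// The script itself would be loaded from your own endpoint, e.g.:
//   <script src="https://your-site.com/vendor/rover.js"></script>

(window as any).Rover?.init({
  // Route the agent's API traffic through your own custom domain.
  apiBase: "https://agent.your-site.com",
  // Disable telemetry so no usage data is collected by the vendor.
  telemetry: false,
});
```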
If the agent is in your site, you are at much less mercy of Google's agent, which can:
- redirect users to another site
- shift the user's focus from your site to what's going on in the agentic chat (where Google can serve ads/recommendations), outside of your control
We want to be long-term partners to websites in crafting the agentic experience between them and their users.
This isn't a ______. This isn't a ______. This is ______.
The message is clear: ______ isn't a nice-to-have. It's a ______.
But here's what nobody's talking about: ...
I can't force myself to spend more thought reading than was spent writing.
"I can't force myself to spend more thought reading than was spent writing". I'm going to use this quote a lot.
Thanks for the feedback!
We completely rewrote our launch blog post and included a demo video of the embedded agent live!
"Simple, Transparent Pricing" => vibe coded product