NinjaTech AI, a Silicon Valley-based generative AI company, has announced the public beta launch of its new AI agent service Ninja AI, a web application designed to act as a researcher, software engineer, scheduler/secretary, and more. Users can try it out themselves now at myninja.ai.

The service offers a free tier and paid subscriptions. For paid users, it provides access (through application programming interfaces, or APIs) to a number of leading generative AI models, including OpenAI’s GPT-4o, Anthropic’s Claude 3, and Google Gemini.

Simply by typing to Ninja AI in natural language, users can select among these AI models to power a number of underlying operations, including web search and a “deep search” performed by a more thorough AI agent powered by Llama 3.

It can compare results from multiple AI models in real time using GPT-4, summarizing the commonalities and flagging the differences. It can also schedule calendar events on the user’s behalf while avoiding conflicts; doing so requires logging in with a Google account for Google Calendar, and an Apple iCal integration is on the near-term roadmap as well.
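For a sense of what that multi-model comparison involves under the hood, here is a minimal sketch of the general fan-out-and-summarize pattern using the public OpenAI and Anthropic SDKs. It illustrates the pattern only, not NinjaTech’s implementation; the model identifiers and prompts are assumptions.

```python
# Illustrative sketch: fan a prompt out to two models, then use a third call
# to summarize agreements and differences. Not NinjaTech's actual code.
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()        # reads OPENAI_API_KEY from the environment
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask_gpt4o(prompt: str) -> str:
    resp = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_claude(prompt: str) -> str:
    resp = anthropic_client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def compare(prompt: str) -> str:
    # Fan out to both models, then summarize commonalities and differences.
    answers = {"GPT-4o": ask_gpt4o(prompt), "Claude 3": ask_claude(prompt)}
    summary_prompt = "Summarize what these answers agree on and where they differ:\n\n"
    summary_prompt += "\n\n".join(f"{name}:\n{text}" for name, text in answers.items())
    return ask_gpt4o(summary_prompt)
```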


The agent can also email invitations to human recipients autonomously from its own email address, speak to the user in different voices, and even video chat with them using Unreal Engine-powered 3D characters.

And Ninja AI can do many of these tasks simultaneously and asynchronously, operating in the background while the user does other tasks with Ninja AI or elsewhere on their devices.

The service pings the user when it completes an operation, and users can check in on each workflow from a sidebar. Also, unlike ChatGPT and other more consumer-facing AI chatbots, Ninja AI accepts multiple requests typed in at once and will try to complete them all in the order the user asks.
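As an illustration of that asynchronous, background-task behavior, the following minimal Python sketch runs several placeholder jobs concurrently and “pings” as each one finishes. It assumes nothing about Ninja AI’s internals; the task names and timings are invented.

```python
# Minimal sketch of concurrent background tasks with a notification on completion.
import asyncio

async def run_task(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)   # stand-in for a long-running agent job
    return f"{name} finished"

async def notify(message: str) -> None:
    print(f"[ping] {message}")     # a real service might push a UI notification instead

async def main() -> None:
    requests = [("web search", 1.0), ("deep research", 3.0), ("scheduling", 2.0)]
    tasks = [asyncio.create_task(run_task(n, s)) for n, s in requests]
    for finished in asyncio.as_completed(tasks):   # ping as each task completes
        await notify(await finished)

asyncio.run(main())
```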

“Not everything is about question and answer,” said Babak Pahlavan, founder and CEO of NinjaTech AI, in a video chat interview with VentureBeat. “Especially in real world, you need human assistants to deal with software but also other humans.”

The five key agents offered by Ninja AI right now include:

  • Ninja Advisor
  • Ninja Coder
  • Basic Scheduler
  • Real-time Web Search
  • Limited third-party LLM access

Paid tiers offer more tasks

Ninja AI is designed to have a “generous” free option, in Pahlavan’s words, offering users up to 20 tasks daily from the Advisor, Coder, Researcher, and third-party LLMs, as well as 5 tasks daily from the Scheduler agent.

But users willing to pay $10, $20, or $30 monthly get substantially higher daily and monthly task limits.

Pahlavan previously spent more than a decade at Google, concluding his time there in 2022 as a Senior Director of Product Management after overseeing various enterprise software verticals.

“My insight was we needed something that’s above and beyond question and answering systems, no matter how smart they get,” Pahlavan told VentureBeat. “And we almost did it within Google.”

After leaving the search giant, he joined the nonprofit scientific research group SRI International as an entrepreneur-in-residence, which is where the seeds of NinjaTech AI and Ninja AI were planted.

Pahlavan’s co-founders include Sam Naghshineh, formerly an engineer of hyperscale systems at Meta, who now serves as NinjaTech’s chief technology officer (CTO), and Arash Sadrieh, a close friend of Pahlavan’s and a former senior applied scientist at Amazon Web Services (AWS), who now serves as NinjaTech’s chief science officer.

“We are basically putting together everything we have learned from Google, AWS and Meta about how to build software systems at scale in a global setting,” Pahlavan said.

Photo of NinjaTech AI co-founders (L-R): Arash Sadrieh, Babak Pahlavan, Sam Naghshineh. Credit: NinjaTech AI

Ambitious goal of putting multiple AI models to work under one roof

The goal is to provide a service that lets busy professional consumers (prosumers) get the most out of AI right now, without waiting for some theoretical future breakthrough in the form of new models.

“The advisor agent has been trained on several thousand human, multi-round conversations,” Pahlavan explained. “So it’s meant to be thoughtful, kind, clean, professional, suitable for work environments.”

Furthermore, Ninja AI is designed to let users get the benefits of multiple AI models working together under one virtual roof on their behalf, without having to manually open different models and tab between them.

“The core competency of the company is about agents that can break down complex tasks, come up with a plan dynamically, and then activate tools that are at its disposal in order to execute those tasks, either in real time, asynchronously until the task is done, or until it gets blocked and it has a question for the user,” stated Pahlavan.
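A toy version of that plan-and-execute loop might look like the sketch below: break a request into steps, dispatch each step to a named tool, and raise a question back to the user when a step can’t proceed. The planner, tool registry, and step format here are illustrative assumptions, not NinjaTech’s code.

```python
# Simplified plan-and-execute agent loop: plan steps, dispatch each to a tool,
# and flag a question for the user when a step is blocked.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    tool: str
    argument: str

TOOLS: dict[str, Callable[[str], str]] = {
    "web_search": lambda q: f"(search results for '{q}')",
    "code": lambda spec: f"(generated code for '{spec}')",
    "schedule": lambda event: f"(calendar entry created: '{event}')",
}

def plan(task: str) -> list[Step]:
    # A real agent would have an LLM produce this plan dynamically; it is hard-coded here.
    return [Step("web_search", task), Step("schedule", f"follow-up on: {task}")]

def execute(task: str) -> list[str]:
    results = []
    for step in plan(task):
        tool = TOOLS.get(step.tool)
        if tool is None:
            results.append(f"blocked: unknown tool '{step.tool}', ask the user")
            continue
        results.append(tool(step.argument))
    return results

print(execute("find a time to demo Ninja AI to the team"))
```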

Powered by AWS silicon

Interestingly, NinjaTech AI eschews the leading generative AI chips, Nvidia’s graphics processing units (GPUs), in favor of Amazon Web Services’ custom machine learning chips, which are manufactured by partner Taiwan Semiconductor Manufacturing Company (TSMC).

NinjaTech used these chips, Trainium and Inferentia2, and Amazon’s cloud-based machine learning service, Amazon SageMaker, to build, train, and scale its AI agents and allow them to perform multiple tasks simultaneously without driving up costs for the young startup.

“We trained all the models using AWS Trainium via SageMaker, and then everything is also being served using Trainium and Inferentia,” said Pahlavan.
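For readers curious what training on Trainium through SageMaker looks like in practice, the following sketch uses the public SageMaker Python SDK’s PyTorch estimator with a trn1 instance type. NinjaTech’s actual scripts and settings are not public, so the entry point, IAM role, instance type, and framework version below are placeholders and assumptions.

```python
# Hedged sketch of launching a training job on AWS Trainium via SageMaker.
# The script name, IAM role, instance type, and framework version are assumed.
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()

estimator = PyTorch(
    entry_point="train_agent.py",          # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    instance_count=1,
    instance_type="ml.trn1.32xlarge",      # Trainium (trn1) instance family
    framework_version="1.13.1",            # PyTorch version with Neuron support (assumed)
    py_version="py39",
    sagemaker_session=session,
)

estimator.fit({"training": "s3://example-bucket/agent-training-data/"})
```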

Rahul Kulkarni, Director of Product Management at AWS, commented on the collaboration’s benefits in a video call interview with VentureBeat, emphasizing that the custom silicon was designed and supported to work best with Amazon SageMaker.

“It’s not just delivering silicon, but it’s also delivering the right framework in the software capabilities to make sure that it can be used by companies like NinjaTech,” Kulkarni noted.

But how much cheaper are Inferentia2 chips than comparable Nvidia GPUs? AWS says customers can expect roughly 40% better price for equivalent performance, at least up to a point.

For especially demanding, computationally heavy operations, GPUs are still the way to go, and AWS offers them as well through its Elastic Compute Cloud (EC2) service.

“Our partnership with Nvidia continues to flourish and we will offer the most up-to-date infrastructure at scale,” said Kulkarni. “But equally important for us is our custom silicon initiative.”

With a team of experts from Google, AWS, and Meta, NinjaTech AI aims to redefine productivity by enabling users to delegate time-consuming tasks to their AI agents, thereby focusing on more strategic activities.
