Everyone in AI is talking about Manus. We put it to the test.

Despite all the hype, very few people have had a chance to use it. Currently, under 1% of the users on the waitlist have received an invite code. (It’s unclear how many people are on this waitlist, but for a sense of how much interest there is, Manus’s Discord channel has more than 186,000 members.)
MIT Technology Review was able to obtain access to Manus, and when I gave it a test drive, I found that using it feels like collaborating with a highly intelligent and efficient intern: While it occasionally lacks understanding of what it’s being asked to do, makes incorrect assumptions, or cuts corners to expedite tasks, it explains its reasoning clearly, is remarkably adaptable, and can improve substantially when provided with detailed instructions or feedback. Ultimately, it’s promising but not perfect.

Just like its parent company’s previous product, an AI assistant called Monica that was released in 2023, Manus is intended for a global audience. English is set as the default language, and its design is clean and minimalist.
To get in, users must enter a valid invite code. The system then directs them to a landing page that closely resembles those of ChatGPT or DeepSeek, with past sessions displayed in a left-hand column and a chat input box in the center. The landing page also features sample tasks curated by the company, ranging from business strategy development to interactive learning to customized audio meditation sessions.
Like other reasoning-based agentic AI tools, such as ChatGPT's Deep Research, Manus is capable of breaking tasks down into steps and autonomously navigating the web to gather the information it needs to complete them. What sets it apart is the "Manus's Computer" window, which allows users not only to observe what the agent is doing but also to intervene at any point.
To put it to the test, I tasked Manus with three assignments: (1) compile a list of notable reporters covering China tech, (2) search for two-bedroom property listings in New York City, and (3) nominate potential candidates for Innovators Under 35, MIT Technology Review's annual list.
Here’s how it did:
Task 1: The first list of reporters that Manus gave me contained only five names, with five "honorable mentions" below them. I noticed that it listed notable work for some journalists but not for others. When I asked Manus why, the reason it offered was hilariously simple: It got lazy. The omissions were "partly due to time constraints as I tried to expedite the research process," the agent told me. When I insisted on consistency and thoroughness, Manus responded with a comprehensive list of 30 journalists, noting each one's current outlet and notable work. (I was glad to see I made the cut, along with many of my beloved peers.)