Founders' Corner

Replit CEO Amjad Masad on Culture Building and Competitive Advantage in the Age of AI

Growing up in Amman, Jordan, Amjad Masad didn’t have a computer. He learned to program on borrowed machines or at internet cafes. Setting up and switching coding environments was a pain, so he dreamed of building a platform where anyone could learn, build, collaborate and share on the go.

That dream, which he pursued as a side project while working at Yahoo, Codecademy and Facebook, is now a reality. Replit is home to more than 20 million developers, many of whom are earning money and making a living on the platform, and together they have built more than 240 million projects.

Key to Replit’s growth and reputation is its ability to build and ship — fast. Not surprisingly, it is one of the pioneers in building with AI, with tools like Ghostwriter and a partnership with Google for AI software development.

Earlier this month, Amjad spoke with Reach founders about Replit’s journey and how startups can navigate today’s rapidly changing AI landscape. This is the first part of that conversation, which looks at how Replit built and maintained its engineering culture, and where competitive advantages exist in an AI-driven world. Part 2 (forthcoming) will explore what the future of education and programming may hold.

On Replit’s master plan and its progress to date

Amjad: In our 2016 pitch deck, we laid out our master plan in three bullet points. The first was to grow the population of young coders, with educators and through schools. The second was to build a collaborative environment assisted by AI; with our network of developers and that wealth of data, we could build tools that no one else could build. And the third was to get to a point where you not only learn on the platform, but can also deploy your work and build a business entirely on it.

Our vision from the start has been to be the place where you write your first line of code, earn your first dollar and build your first startup. That’s happening now. We’re seeing people go from our free courses to earning money on our platform. We have a microeconomy that’s growing quickly, where people make money by taking on coding “bounty” projects outsourced by others. We’re seeing people make as much as $10,000 a month that way and earn a living.

We are passionate about people being able to program as a means of living, and not just for the sake of learning. We pay creators across the world; the U.S. and India account for the majority, but increasingly also Nigeria, Kenya, Sri Lanka, Malaysia and many other places that are coming online.

We’re seeing startups as well. We have a new deployment platform with a full hosting solution. From the last Y Combinator batch we’ve seen several startups built entirely on Replit.

On hiring and keeping the right culture for the mission

To really make a foundational change in how people learn to code, to make programming more accessible and bring people in to participate in the digital economy, we had to build a lot of things from scratch.

To do that, we had to build the company culture around shipping fast, staying lean and keeping as few management layers as possible. In addition to being the CEO, I’m still the Head of Engineering today. There are some personal downsides to wearing both hats, of course, like not having much time to myself.

At every stage of the company, when you see signs of bureaucracy and unnecessary processes start to seep in, you have to fight them actively. It’s a constant battle for founders to make sure that the company doesn’t slow down.

We’re now at 80 people, which is small given our impact and footprint as a Series B company. This is intentional. We’ve always resisted the rapid-headcount-growth mentality, and a lot of times we’ve actually had to reset the culture to make sure we’re growing on a strong footing and keeping a super high bar for engineering.

We’ve had this bet from the start that AI is really going to change programming. As part of that, we committed to being a high-caliber team. Hiring at that bar forces you into certain habits, and for an engineering culture it certainly helps when the CEO is also an engineer. A lot of our all-hands meetings are about technical achievement and highlighting ambitious, amazing work. It’s also just about having a lot of fun building stuff together.

On the competitive market for AI talent

In addition to running Replit, I do quite a bit of angel investing. Even companies started by the best AI people are finding it hard to hire. Google, OpenAI and the like are paying salaries you would not even imagine.

What I would suggest to startups: don’t try to hire the top AI researchers and rock stars. At Replit we’ve had luck with new grads who have done some ML and AI work, and also with regular, seasoned engineers who want to move into AI.

The cool thing about LLMs and transformers is that they are not like traditional ML, which is very laborious, involving tinkering with data, labeling and other grind work. Right now, a lot of AI building is just typical engineering. So I would hire really great engineers who want to learn AI, and maybe a couple of ML people just to have that deep knowledge.

On transformers as the new industry standard (and why it’s not worth trying something else)

When the technology industry standardizes on something, a huge amount of effort goes into optimizing that thing at the software and hardware layers. The transformer model, the underlying technology behind LLMs, and diffusion models are becoming that standard. With its H100 chips, NVIDIA is optimizing for the transformer at the hardware level.

I would really advise against venturing out and doing something else. Transformers are very versatile. You have the GPTs of the world, which you can program via prompting. You have LLMs, which you can fine-tune. You can grab a model off Hugging Face and fine-tune it for spam and abuse detection, which is what we do.
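As a rough sketch of that last point, here is what fine-tuning an off-the-shelf Hugging Face model into a spam classifier can look like. The base model (distilbert-base-uncased), the public sms_spam dataset and the hyperparameters below are illustrative stand-ins, not Replit’s actual setup:

```python
# Minimal sketch: fine-tuning a small Hugging Face model as a spam classifier.
# distilbert-base-uncased and the public "sms_spam" dataset are stand-ins;
# Replit's actual models and data are not public.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("sms_spam", split="train")  # columns: "sms", "label"
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # 0 = ham, 1 = spam
)

def tokenize(batch):
    return tokenizer(batch["sms"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="spam-classifier", num_train_epochs=1),
    train_dataset=tokenized,
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```

The appeal of this route is exactly what Amjad describes: the hard pretraining work is already done, so a small, cheap fine-tune on task-specific labels is often enough.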

We still haven’t fully explored the capabilities of transformer models. There’s so much to do, even just in prompt engineering. There are so many prompting strategies — chain of thought, tree of thought and many others coming out — that can give you a big improvement over the base models.
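To make that concrete, here is a minimal sketch of zero-shot chain-of-thought prompting; the question and the call_llm helper are hypothetical stand-ins for whatever model API you use:

```python
# Minimal sketch of zero-shot chain-of-thought prompting.
# `call_llm` is a hypothetical stand-in for whatever completion API you use.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

question = (
    "A project gets 120 visits a day and converts 5% of them to sign-ups. "
    "How many sign-ups is that over 30 days?"
)

# Base prompt: the model is asked to answer directly.
plain_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompt: a trigger phrase nudges the model to reason step
# by step before answering, which often helps on multi-step problems.
cot_prompt = f"Q: {question}\nA: Let's think step by step."
```

Few-shot variants work the same way, except you prepend worked examples that include their reasoning.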

On closing the delta between research and development — and how this will be a key competitive advantage

Transformers have also made the hurdle for applying machine learning much, much lower. I would keep an eye on the different ways of using transformers and LLMs, and pay attention to the latest research coming out.

Right now the delta between research and applied development has shrunk significantly. When we see a paper like FlashAttention, we literally implement it the next week. Twitter (or X) has been a place where a lot of researchers share this information, though it’s become quite noisy of late. But there are other sites where people share the latest papers and learnings. Keeping an eye on cutting-edge research is important to staying competitive.

This shouldn’t be a top-down directive. If you have a team that’s passionate about this work, create a Slack channel, let people post research and news on the latest advancements and what other people are doing. Then you’ll see discussions and ideas about what might make sense to try.

To make this work, you need a culture that is not so roadmap-driven. Things are coming out so quickly today that you should be more dynamic and less rigid. People often get tied to roadmaps, and if you do, you’re not going to be able to respond to changes quickly.

On build versus buy

Our internal bias is to build first. But on a strategic level, it’s important to think about which layers are going to commodify and which layers are where differentiation happens; the latter is where you want to build to capture value.

For example, the cloud is already semi-commodified. It’s really hard to compete when there are already three or four big players, so we’re not going to build data centers and basic cloud components. We’re building the abstraction layer on top of those components, which is the differentiated layer.

With regard to LLMs, it’s a little trickier to forecast how this plays out. From my vantage point right now, it feels like they’re commodifying because the interface for LLMs is text; they all have the same kind of interface and use natural language. When OpenAI added function calling, people thought that was how they were going to start differentiating. Then, two weeks later, open-source models had function calling too. As long as the interface and the capabilities of these models are known, within a certain timeframe they will commodify.
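For reference, here is a minimal sketch of the function-calling interface Amjad is describing, written against the OpenAI Python SDK; the get_weather schema and the model name are hypothetical examples, and open-source serving stacks expose essentially the same shape, which is why it was so quick to copy:

```python
# Minimal sketch of LLM function calling with the OpenAI Python SDK.
# The get_weather schema and model name are hypothetical; the point is that
# the interface is just JSON Schema plus natural language.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Amman?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
)

# If the model decides to call the function, structured arguments come back
# here instead of a plain text answer.
print(response.choices[0].message.tool_calls)
```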

I would not try to build a lasting advantage, especially as a startup, by training LLMs. You might have to train a model because no one has done the work for you, because you need to take some open-source model and fine-tune it in a certain way, or because you need to improve your operating margins. But it needs to be driven by need. I would buy commodities and build differentiation.

Read Part 2 for Amjad’s thoughts on what the future of education and programming may hold.