Tech debt rears its head in many guises. Here are some more examples of the kinds of things that can hide in web projects:
It won't scale
If you design a solution to be infinitely scalable - more accurately, 'massively scalable' - it will probably take longer to build than it needed to.
Even if you're absolutely certain it's going to need to meet that scale, it's still sensible to build a smaller, leaner iteration before jumping right in.
Agile wisdom says that when building a sportscar, you should build a wheel, then a skateboard, then a buggy, then a compact, and eventually evolve it right the way through to your target sportscar.
This approach helps identify unknowns early, before the design becomes too rigid to change.
If you've taken an agile approach then hopefully you've deliberately accrued some tech debt around scalability.
It's a monolith
When building a solution, it's typically slightly quicker to build it as a single large program.
Admittedly this stops being true as the codebase grows, and purists would argue that it's actually faster to design and build microservices from the start; but judging by how many teams have built monoliths, I'd argue that, unless prompted otherwise, that's how big projects tend to go.
Once you're faced with a monolith, that code becomes very difficult to maintain and support:
Maintenance releases typically require downtime.
Parallel running instances of the codebase create concurrency issues.
Pinch points accumulate around data write operations.
Business domains get conflated into single codebases and lose flexibility.
The solution is to build out microservices whose high-level designs follow that of the business or organisation they serve.
Splitting a large monolithic codebase into microservices can be a particularly tricky remediation activity.
This can be a particularly toxic form of tech debt because it's so costly to sort it out later.
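As a sketch of what that splitting looks like, here's a deliberately simplified Python example; `InventoryService`, `OrdersService` and their methods are hypothetical names, not from any real codebase. Each business domain owns its own data and exposes a narrow interface. In a real system each class would be a separately deployed service behind an HTTP or messaging boundary rather than an in-process object:

```python
class InventoryService:
    """Owns stock levels; no other service may touch its data directly."""

    def __init__(self):
        self._stock = {"widget": 5}

    def reserve(self, sku, qty):
        """Reserve stock, returning False if there isn't enough."""
        if self._stock.get(sku, 0) < qty:
            return False
        self._stock[sku] -= qty
        return True


class OrdersService:
    """Owns orders; talks to inventory only through its public interface."""

    def __init__(self, inventory):
        self._inventory = inventory
        self._orders = []

    def place_order(self, sku, qty):
        """Place an order if stock can be reserved; return an order id or None."""
        if not self._inventory.reserve(sku, qty):
            return None
        order_id = len(self._orders) + 1
        self._orders.append((order_id, sku, qty))
        return order_id


inventory = InventoryService()
orders = OrdersService(inventory)
print(orders.place_order("widget", 2))  # 1
print(orders.place_order("widget", 9))  # None: insufficient stock
```

The point isn't the toy logic; it's that the order domain can change, scale and deploy independently of the inventory domain because neither reaches into the other's data.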
If anything fails, it all fails
Running a single instance of each component is quicker to set up, but it means one failure can take everything down. It takes time to go back and swap out those single instances for managed clusters (horizontal scaling), and more time to ensure there's no scenario where an elaborate combination of failures means that you lose data.
Investing that time too early is wasteful, so it's worth accumulating some technical debt, because it's going to be remediated later... just so long as you do actually remediate it later!
Coding for failure is another area where it's best not to accumulate the debt in the first place, because fixing it retrospectively is tough.
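To make "coding for failure" concrete, here's a small sketch of one common pattern: retrying a flaky downstream call with exponential backoff. `call_with_retries` and the fake flaky dependency are illustrative, not from any particular library:

```python
import time

def call_with_retries(operation, attempts=3, base_delay=0.1):
    """Retry a flaky operation with exponential backoff, re-raising the
    last error if every attempt fails."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# A fake dependency that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky))  # ok
```

Retrofitting this kind of resilience across a codebase that assumed every call succeeds is exactly the expensive remediation the paragraph above warns about.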
You didn't do TDD
There are robust development methodologies that should make it impossible, but sometimes unit and integration tests are:
functionally incomplete
missing key areas of the site's functionality
lacking coverage of parts of the code
out of date, not having kept pace as the product has evolved
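To show the shape of what's missing, here's a minimal sketch using Python's unittest module; `basket_total` and its tests are hypothetical, and in TDD the tests would be written first, fail, and then drive the implementation:

```python
import unittest

def basket_total(items):
    """Sum (price_in_pence, quantity) pairs into a basket total."""
    return sum(price * qty for price, qty in items)

class BasketTotalTest(unittest.TestCase):
    def test_empty_basket_is_zero(self):
        self.assertEqual(basket_total([]), 0)

    def test_sums_price_times_quantity(self):
        self.assertEqual(basket_total([(250, 2), (99, 1)]), 599)

# Run the tests programmatically rather than via the command line.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(BasketTotalTest)
unittest.TextTestRunner(verbosity=0).run(suite)
```

Tests like these only pay off if they evolve alongside the product; a suite frozen at launch is just another form of the debt described above.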
Nothing's in production yet
If Sprint 0 is about setting the team up, Sprint 1 is entirely about getting to production.
Sadly, it's Sprint 26, and while your alpha testing has gone really well and you've got some great user feedback, you've nothing in production.
I see it over and over again: teams leave production until later, or worse, expect to hurl the code over the wall to an operations team who "look after live".
You build it, you run it.
I'm a firm believer that having an app in production as early as possible is a great discipline for a team.
If you can get into a regular release cadence that empowers everyone in the team to embrace the DevOps methodology, with Development and Operations collaborating right through the software development lifecycle, you're going to go far.