
When Microsoft chairman Bill Gates belatedly revealed .NET in June 2000, he offered a vague and confusing look at the future. But things got much more specific about a month later, when the software giant hosted its Professional Developers Conference (PDC) 2000 in Orlando, Florida. There, for the first time, developers learned about the various technologies that would make up .NET, a new programming language called C#, and a new version of the Visual Studio integrated development environment (IDE). They also heard the first official word on the next two versions of Windows, codenamed Whistler and Blackcomb, the former of which I had first leaked to the world.
I’ll have more to say about all that soon. For now, let’s focus on the reality of .NET as opposed to the marketing smokescreen that Microsoft had presented one month before PDC 2000. Starting with why .NET even exists. What problems did it hope to solve?
As Richard Campbell explains in his excellent History of .NET talks, .NET wasn’t the result of a years-long strategy or some single visionary leader. Instead, it was the culmination of three separate but related efforts to advance various aspects of Microsoft’s software development stack.
The first involves runtimes, the environments in which software code runs. In the Microsoft world, there were several different runtimes by the beginning of the 21st century, each with several different versions. The most obvious examples are Visual Basic—where the file VBRUN300.DLL represents the popular Visual Basic 3.0 runtime—and Visual C++, which supported multiple runtimes.
Before Visual Studio arrived in 1997, each of Microsoft’s programming languages and environments was developed and sold separately. And in addition to having different runtimes, each had different IDEs, different ways of interacting with the system, different extensibility models, and other differences. Visual Basic had a graphical forms designer that made creating Windows user interfaces easy enough for beginners. But Visual C++, despite its name, did not. (Well, for the most part. Over time, Microsoft did add limited graphical UI capabilities to that product.)
The goal with Visual Studio was to make Microsoft’s developer solutions more consistent and integrated with each other. But that would have to happen over time. The initial release of the product, Visual Studio 97, was simply an Office-style bundle of several previously separate environments, including Visual Basic 5.0, Visual C++ 5.0, Visual J++ 1.0 (for Java), Visual InterDev 1.0 (for Active Server Pages (ASP)-based web development), Visual FoxPro 5.0, and the Microsoft Developer Network (MSDN) documentation for each. Each had its own user interface and capabilities.
Its successor, Visual Studio 6.0, arrived in late 1998 and provided an integrated IDE for both Visual InterDev 6.0 and Visual J++ 6.0—both saw their version numbers incremented to match both Visual Studio and the core products it contained—and a consistent component model across most solutions. But each product still had its own runtime.
The problems with Microsoft’s multiple runtimes were as numerous as they were obvious. Every time Microsoft added a new feature to Windows—and every Windows version added dozens if not hundreds of new features—each of the software giant’s developer teams would work to add support for those features to its own environment. Each effort was completely separate, each ran on its own schedule, and each required a new runtime version. This was a key contributor to “DLL Hell,” as individual first- and third-party applications would require not just specific runtimes, but specific runtime versions to be installed for those applications to work properly.
Behind the scenes, a team at Microsoft—key members included Visual SourceSafe creator Brian Harry and Jason Zander, who now runs Microsoft Azure—was working to create a single runtime that would work across multiple Microsoft developer environments and help ease the DLL Hell issue. This common runtime would be available not just to Microsoft’s internal teams but also to third-party developers that created their own developer environments and languages. Then, Microsoft could simply update this common runtime once, with support for new Windows features as they arrived, and they would be immediately available to any solutions that supported it.
The second effort involved finding a clean, object-oriented programming (OOP) language and class library that could replace Java and the Windows Foundation Classes (WFC), respectively, after Sun sued Microsoft for usurping its developer environment. Fortunately, Visual J++ and WFC creator Anders Hejlsberg was on hand to solve that problem: He created a C-like object-oriented language, or COOL, that would be simpler than C++ and would utilize a common set of base class libraries that would be simpler than MFC and as sophisticated as WFC.
The third effort that led to .NET involved an enterprising new Microsoft recruit, Scott Guthrie, who had joined Microsoft in 1997 and first worked on the NT Option Pack, which brought Microsoft’s web server, Internet Information Server (IIS), and the server-side ASP capabilities (among other things) to Windows NT 4.0. Mr. Guthrie was unhappy with ASP and felt that this environment, which let you comingle server-side VBScript or JScript scripts within HTML and access server-side COM components, was unsophisticated and wouldn’t scale to meet the needs of the future. And so he created something he called ASP+ that was based on Java and completely object-oriented. It took him only one month.
When Guthrie showed his project to the team in January 1999, it was universally embraced except for one thing: He would need to replace Java with something else because of Microsoft’s ongoing legal issues with Sun.
That these three efforts are related is perhaps obvious today, but each was trying to solve a specific issue, and they were originally separate and unaware of each other. What’s interesting is that each was likewise complementary, and all three, combined, formed the basis for a new platform.
The runtime effort would eventually be known as the .NET Common Language Runtime (CLR). COOL morphed into C#—pronounced “see sharp”—and its class libraries would become the .NET Framework, which would work consistently and identically with any CLR-based language. And ASP+ would adopt C#, which was fortuitously just coming together at that time, and it would be renamed to ASP.NET as Microsoft’s suddenly cohesive vision for a future of Next Generation Windows Services (NGWS) morphed into Next Generation Web Services (also NGWS) and then a name that was less of a mouthful, .NET. (“Dot net.”)
Those early .NET codenames speak to an interesting bit of positioning: .NET was very much a Microsoft platform and not an open platform. Until that time, each Microsoft developer environment had targeted Windows specifically or, in more recent years, web development in which Windows Server and/or a Windows DNA server was used on the backend. With .NET, Microsoft would create a single all-encompassing environment that would span Windows clients and servers and would facilitate using web services to communicate between the two. It would also work on Windows Mobile smartphones and PDAs, and on other Microsoft platforms to come. What was left open-ended was support for non-Microsoft platforms: the firm would allow third parties to bring .NET to platforms like the Mac and Linux, opening them up to any .NET developers in the future.
This interesting loophole was barely noticed at the time. But it would have a major impact on the future of .NET and on Microsoft, both of which later evolved to fully embrace the open-source world. But that, of course, is a story for another day.
Next up: Microsoft explains .NET to developers.