Imagine if Microsoft had not been so dead set on using C# to push their operating system, but instead had spent significant resources on making C# available on Linux in a manner that wasn't upsetting to the open source community.
Rather than the "fan fiction" variant of the platform that is Mono (which is forever condemned to be the poor man's alternative), there could have been a viable .NET platform on Linux today -- and on other Unixen.
If Microsoft hadn't seen C# and .NET as a tool for securing an audience for their OS, they could have owned software development. People like to malign .NET, but I don't think they do this because there is anything inherently bad about C# and .NET -- they do it because .NET implies Windows, and Windows isn't exactly the favorite OS of open source developers. Or closed source developers, for that matter.
It is easy to speculate that Java would have atrophied and died if Microsoft had targeted Unixen. It is certainly hard to imagine there'd be meaningful collaboration between Sun and Microsoft (thanks to stupid IPR laws, companies are encouraged to waste their time bickering over meaningless trivialities).
Perhaps both would have matured at an accelerated rate? Perhaps not.
In any case, Microsoft committed a strategic blunder by not ensuring they owned the developers. I think it might have been a mistake that is best measured in hundreds of billions of dollars.
v. gorged, gorg·ing, gorg·es
1. To stuff with food; glut: gorged themselves with candy.
2. To devour greedily.
To eat gluttonously.
I'll admit it. I'm an odd sort of Java programmer. I've been programming Java since it first came out and I wasn't terribly enthusiastic about it for many years. In part because there were features lacking, but also because Java needed to settle and mature and all of us needed to learn how to deal with object orientation. OO in the 90s sucked. Sadly some of that suckage is still with us.
When I finally started using Java as my primary language it was because it had finally grown some features I had been missing. The 1.4 release was, to me, the first really useful release (although it can be argued that much of what made 1.4 useful didn't really work until 1.5 or so).
(Update: some people asked "what features". Mainly asynchronous socket interfaces in the shape of NIO -- though truth be told, the NIO implementation in 1.4 was so buggy it almost didn't work and I remember spending a lot of time figuring out workarounds so that our production code would hum along happily. NIO also had some subtle design quirks one has to be aware of and design around).
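For readers who never touched NIO: the heart of its asynchronous socket model is the Selector event loop. Here is a minimal sketch of that idea -- using today's API conveniences like ServerSocketChannel.bind (Java 7+), not the rougher 1.4-era API the update above refers to:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class NioSketch {
    // One trip through a Selector-driven event loop: accept a connection,
    // then read what the client sent -- no thread blocked per connection.
    static String roundTrip() throws IOException {
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {
            server.configureBlocking(false);
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            server.register(selector, SelectionKey.OP_ACCEPT);

            // A client connects and writes; the selector tells us when to act.
            try (SocketChannel client = SocketChannel.open(server.getLocalAddress())) {
                client.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
                while (true) {
                    selector.select();
                    for (SelectionKey key : selector.selectedKeys()) {
                        if (key.isAcceptable()) {
                            SocketChannel conn = server.accept();
                            conn.configureBlocking(false);
                            conn.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            ByteBuffer buf = ByteBuffer.allocate(64);
                            ((SocketChannel) key.channel()).read(buf);
                            buf.flip();
                            return StandardCharsets.UTF_8.decode(buf).toString();
                        }
                    }
                    selector.selectedKeys().clear();
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip());
    }
}
```

The "subtle design quirks" mentioned above are visible even here: you must remember to configureBlocking(false), flip() buffers before reading them, and clear selectedKeys() yourself -- forget any of these and the loop misbehaves in confusing ways.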
For the first 5-6 years of using Java as my primary language I managed to write code that had, by today's standards, exceptionally few third party dependencies. For years I used JUnit, Jetty and a few dependencies that I think Jetty demanded I include.
I also quickly figured out that in order to be able to deploy applications without turning life into a miserable easter-egg hunt for everyone else I had to package my applications into single JAR apps. It amazes me that people still deploy Java web applications into stand-alone containers rather than embedding everything and saving users from whole classes of headaches.
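The post embedded Jetty; to keep this sketch free of third party dependencies it uses the JDK's built-in com.sun.net.httpserver instead -- a stand-in for the same idea, not the author's actual setup. The point is that the server is started from main(), so the whole application ships as one runnable JAR:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class EmbeddedApp {
    // The server is just another object the application creates -- no WAR
    // file, no standalone container to install and configure on the host.
    static HttpServer start() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        // Deployment becomes: java -jar app.jar
        HttpServer server = start();
        System.out.println("listening on " + server.getAddress());
    }
}
```

With Jetty the shape is the same: construct the server, add handlers, call start() -- and the user runs a single artifact instead of hunting for a container that happens to be configured correctly.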
I'm still careful about adding dependencies on third party frameworks because it isn't nice to other developers. Whenever you add a non-trivial framework to your codebase, you are increasing the amount of knowledge you demand from people who need to understand your code. It is no longer enough to just know Java -- you have to know about the framework in question as well.
And most Java programmers don't even know Java all that well.
For badly designed frameworks the amount of confusion and frustration you get quickly overshadows any gain -- real or imaginary. Lots and lots of frameworks are badly designed in that they create more work and more frustration. This ranges from libraries that are just plain badly designed (like OpenSAML) to entire ideas that are bad (O/R mapping frameworks -- seriously, do you really think trading a bit of typing for a lifetime of wondering what is actually taking place under the hood is worth it?)
Also, even though Maven will happily download half the Internet for you in order to resolve dependencies, things tend to break more easily the more third party code you depend on.
I can't remember who said it, but I think one of my colleagues at Comoyo said something along the lines of: "we obsess over our own code with code reviews and such, but we happily depend on any piece of shit some crazy monkey on the net has cobbled together in his spare time".
Think about that. Because it is true.
Right now your Java application probably contains a dozen version conflicts that you have not noticed. The more common variety being the same library occurring in your transitive dependency graph more than once and in different versions. It is sheer dumb luck that your application works at all. If you don't know what I am talking about I suggest you have a close look at your transitive dependency graph.
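Running `mvn dependency:tree` will show you the conflicts Maven silently resolved for you. You can also interrogate the classpath directly: if the same class file is served from more than one location, two JARs ship it, and classpath order decides which one wins. A small sketch (the class name queried in main is just a hypothetical example):

```java
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class DuplicateClassCheck {
    // Returns every classpath location that provides the given class.
    // More than one hit means multiple JARs ship the same class, and
    // which copy actually loads is decided by classpath order.
    static List<URL> locationsOf(String className) throws IOException {
        String resource = className.replace('.', '/') + ".class";
        return Collections.list(
            DuplicateClassCheck.class.getClassLoader().getResources(resource));
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical: query a class you suspect comes in via two dependencies.
        for (URL url : locationsOf("org.slf4j.Logger")) {
            System.out.println(url);
        }
    }
}
```

If that loop ever prints two lines for one class, your application is working by the "sheer dumb luck" described above.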
Here are a few rules I try to live by:
- Be nice to your fellow programmer. Do not assume that they are interested in spending time replicating your development and runtime environment. If you assume more than the JRE/JDK and perhaps Maven or Ant you are being a bit of a dick. It should be possible to check out, build and run your project without reading a manual and going on an easter egg hunt.
- Try not to add frameworks that fundamentally change the programming model. IoC, AOP and the like are good fodder for conferences and provide consultants with a great market for wrangling out-of-control code. But it isn't nice to take over a project that is infested with all manner of confusing shit that will go out of fashion at some point anyway.
- Try not to add dependencies that come with huge dependency graphs themselves if you can help it. Much of that code will have been written by "crazy monkeys".
- Look in the Java standard library first.
- Have a lower threshold for dropping dependencies than for adding them.