In my last post about the base technology I wrote that we need ways to start higher up in the chain of existing software. The place where you can start building your new software lies somewhere between the minimum technology needed and the maximum technology possible.
These two points don't have to lie on the same line. Take a new web search engine as an example. Here the minimum point is a C compiler and a Linux server.
The maximum technology currently in existence is the technology behind Google.
Our current base would be a web server, like Apache with PHP and MySQL running on it. This is the typical situation on shared hosting servers. Or maybe Perl with Plack and a web framework.
It's easy to see that this base is not even close to being an actual search engine. It would be really hard to copy Google, and even harder to get the actual source code and data.
It could be possible to just type the command create-search-engine on a running server and everything would set itself up. The problem is that it just doesn't work like this.
Of course a web search engine is a really big example, spanning many hundreds of servers and many thousands of processes.
The thing is that even the smaller examples are not that simple to set up. How about setting up a website that doesn't require you to know about PHP, SQL, Apache, vhosts, DNS, HTML, CSS, templates and more of those things? And what if you would like to set up different modules: a weblog, a webshop, a few pages, a FAQ, an order processing backend, a bug tracker or a newsletter module. And then link between these modules. And have them developed by different developers. And add your own modules. And have the same visuals everywhere. On your own server(s). Or hosted somewhere else. And have access to the source code.
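To make the wish a bit more concrete, here is a purely hypothetical sketch, in Python, of the kind of interface such a system could expose. None of these classes or method names exist anywhere; they only illustrate the idea that adding a module, linking modules together and sharing one theme could each be a single call instead of a pile of PHP, SQL and vhost configuration.

```python
# Purely hypothetical sketch: what a "just works" site builder could look like.
# Nothing here is a real library; it only models the wish described above.

class Module:
    def __init__(self, name):
        self.name = name
        self.links = []

    def link_to(self, other):
        # Cross-linking between modules should be one call,
        # not hand-edited templates and URLs.
        self.links.append(other.name)


class Site:
    def __init__(self, theme):
        self.theme = theme      # one theme, same visuals across all modules
        self.modules = {}

    def add(self, name):
        # Adding a weblog, webshop or FAQ should not require knowing
        # about PHP, SQL, Apache, vhosts or DNS.
        mod = Module(name)
        self.modules[name] = mod
        return mod


site = Site(theme="default")
blog = site.add("weblog")
shop = site.add("webshop")
blog.link_to(shop)

print(sorted(site.modules))   # installed modules
print(blog.links)             # cross-module links from the weblog
```

The point of the sketch is not the code itself but the size of it: the whole setup the paragraph above describes fits in a handful of lines, while today it means a different developer, a different stack and a different configuration for every module.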
How hard could it be?