The high-level process would be simple.
Decide how many of those teraflop machines will be bought and how many will be built (read, assembled) locally. Seek expressions of intent from the vendors who are milling around. Consult the users at the planning stage and during the purchase of hardware, because different real-world problems require different architectures. Call for proposals in a transparent, phased manner, based on the pressing problems your core team of experts has identified, then get on with releasing money and solving those problems. Sounds like a plan in any circumstance.
This is where intent and actions differ, unfortunately. Because two years later, not one supercomputer has been installed.
It can’t be that no one knows how many of these will be bought and how many will be assembled. “Earlier the plan was half of the supercomputers would be bought to get the research started and then half of the systems would be integrated,” says a technology vendor, who requested not to be named because he is still involved in the project. “As vendors, we have given technology specifications and roadmaps several times. An expression of intent was sought but nothing has moved beyond that.”
Various committees have been set up: for R&D, infrastructure, applications and so on. But they are all at sea. “Committees have been meeting but have no clue how to proceed, or to what deadline. They discuss, go back, meet again, discuss and go back,” says one committee member in Bengaluru, who requested not to be named.
For a mission that was meant to revive India’s place in the global Top 500 supercomputers list, create a cadre of professionals and get Indian industry hooked on HPC, it has become an old boys’ network. A network that, two years into the project, isn’t even articulating what it intends to do.
Supercomputing at the speed of word of mouth
Late last year, an email from an academic at the Indian Institute of Science (IISc) was sent to a few people. Three, to be precise.
That was the call for proposals of a priority supercomputing mission: by word of mouth. The email’s writer does not say in what capacity he is soliciting the proposals, or why it is not being done through the NSM website. Perhaps because he realised the site was, ironically, frozen in time (or was until the publishing of this article), with no useful information, not even a ‘Contact us’ link.
Unlike a typical office project, calls for academic proposals are governed by data protection acts. It would be considered deeply unethical for any reviewer of a proposal to leak its contents.
“Why on earth would I respond to a random email asking for a proposal, which, essentially, is a giveaway on the kind of scientific problems I want to work on?” asks a physicist who works in this field and did not receive the email directly but stumbled onto it through three degrees of separation.
Now, contrast this with what India’s space agency, Isro, did last August when it sought proposals for its Mars-2 mission: an open, public call that laid down all the rules.
And remember, Mars-2 is a much smaller project than the NSM. (While the Mars-2 budget is not public yet, it can be estimated from the Mars-1 budget, which was Rs 447.39 crore, or ~$67 million.)
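For readers unfamiliar with Indian units, the dollar figure follows from simple arithmetic: one crore is 10 million rupees. A minimal sketch of the conversion, assuming an exchange rate of roughly Rs 66.8 per US dollar (the approximate level at the time; the exact rate used by the source is not stated):

```python
# Convert the Mars-1 budget from rupees (crore) to US dollars.
CRORE = 10_000_000          # 1 crore = 10 million rupees
budget_crore = 447.39       # Mars-1 budget, from the article
inr_per_usd = 66.8          # assumed exchange rate, ~2016-17 level

budget_inr = budget_crore * CRORE
budget_usd = budget_inr / inr_per_usd
print(f"~${budget_usd / 1e6:.0f} million")  # prints "~$67 million"
```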
It’s not just the size; the complexity of HPC is rarely fully understood. Anyone leading one of the multiple committees must know several overlapping areas: their own science and numerical analysis, plus programming and hardware, reasonably well. Without that grounding, it is hard to gauge which real-world problems can be solved in 5 years or 10 years, and which cannot be solved even in 20. Not many people in the country are conversant with HPC in this sense, so it isn’t surprising that the committees are groping in the dark. Widespread HPC is no more than 20-25 years old; it proliferated and took off once supercomputers could be built from commodity hardware, starting in the late 1980s.