Kyoto University, a top research institute in Japan, recently lost a whole bunch of research after its supercomputer system accidentally wiped out a whopping 77 terabytes of data during what was supposed to be a routine backup procedure.
That malfunction, which occurred sometime between Dec. 14 and Dec. 16, erased approximately 34 million files belonging to 14 different research groups that had been using the school’s supercomputing system. The university operates Hewlett Packard Enterprise Cray computing systems and a DataDirect ExaScaler storage system, which research teams across the school use for a wide range of projects.
It’s unclear exactly which files were deleted or what caused the malfunction, though the school has said that the work of at least four groups cannot be restored.
BleepingComputer, which originally reported on this incident, helpfully points out that supercomputing research is, uh, not super cheap, either — costing somewhere in the neighbourhood of hundreds of dollars per hour to operate.
“Dear Supercomputing Service Users,” the post begins (translated to English via Google). “Today, a bug in the backup program of the storage system caused an accident in which some files in /LARGE0 were lost. We have stopped processing the problem, but we may have lost nearly 100TB of files, and we are investigating the extent of the impact.”
Supercomputing differs from normal computing largely in its speed and its ability to harness many linked processors to churn through complex calculations in parallel. Those advantages make it a valuable tool for research across a whole range of areas, including climate and atmospheric modelling, physics, vaccine science, and everything in between. Unfortunately, all of that is meaningless if your machine fails to work properly.