A look back at supercomputing at the turn of the century
When I first attended the Supercomputing (SC) conferences back in the early 2000s as an IBMer working in High Performance Computing (HPC), it was obvious the conference was intended for serious computer science researchers and for industries singularly focused on pushing the boundaries of computing. Linux was still in its infancy. I vividly remember re-compiling kernels with newly released drivers every time a new server came to market, just so I could get the system to PXE boot over the network. But there was one …
Years ago, when Artificial Intelligence (AI) began to emerge as a technology with the potential to change the way the world works, organizations began to kick the AI tires by exploring how it could enhance their research or business. However, to get started with AI, neural networks had to be built, models trained on large data sets, and processors found that could handle the matrix-multiplication calculations at the heart of these computationally demanding tasks. Enter the accelerator.