A Brief Introduction to High-Performance Computing

By definition, supercomputers are the fastest and most powerful computers available; at present, the term refers to machines with hundreds of thousands of processors. They are the superstars of the high-performance class of computers. Personal computers (PCs), small enough in size and cost to be used by an individual yet powerful enough for advanced scientific and engineering applications, can also be high-performance computers. We define high-performance computing (HPC) machines as those with a good balance among the following major elements:

Multi-staged (pipelined) functional units.
Multiple central processing units (CPUs) (parallel machines).
Multiple cores.
Fast central registers.
Very large, fast memories.
Very fast communication among functional units.
Vector, video, or array processors.
Software that integrates the above effectively.

As a simple example, it makes little sense to have a CPU of incredibly high speed coupled with a memory system and software that cannot keep up with it.
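To make that balance concrete, below is a minimal C sketch: an illustrative, non-rigorous variant of the classic STREAM "triad" loop. On most machines, the speed of this loop is limited by how fast memory can deliver data, not by how fast the CPU can multiply and add; the array size, file name, and timing method are arbitrary example choices.

    /* Minimal illustrative sketch (not a rigorous benchmark): a streaming
       "triad" loop whose speed on most machines is limited by memory
       bandwidth rather than CPU clock speed.
       Build, e.g.:  gcc -O2 triad.c -o triad  */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 20000000L   /* 20M doubles per array, ~160 MB each */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        clock_t t0 = clock();
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];   /* 2 FLOPs per 24 bytes moved */
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* traffic: read b, read c, write a => 3 * 8 bytes per iteration */
        double gbytes = 3.0 * 8.0 * N / 1e9;
        printf("triad: %.3f s, ~%.1f GB/s effective bandwidth\n",
               secs, gbytes / secs);

        free(a); free(b); free(c);
        return 0;
    }

If the CPU were made twice as fast while the memory system stayed the same, the reported bandwidth (and hence the loop's runtime) would barely change: exactly the imbalance described above.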

High-performance computing and supercomputers are often associated with large, government-funded agencies or with academic institutions. However, most high-performance computing today is in the commercial sector, in fields such as aerospace, automotive, semiconductor design, large equipment design and manufacturing, energy exploration, and financial computing.

HPC is also used in domains in which very large computations, such as fluid dynamics, electromagnetic simulations, and complex materials analysis, must be performed to ensure a high level of accuracy and predictability, resulting in higher-quality, safer, and more efficient products. For example, HPC is used to model the aerodynamics, thermal characteristics, and mechanical properties of an automotive sub-assembly or component to find exactly the right design that balances efficiency, reliability, cost, and safety, before spending millions of dollars prototyping a real product.

Over time, the growing use of high-performance computing in research and in the commercial sector, particularly in manufacturing, finance, and energy exploration, coupled with a growing catalog of HPC applications, created a trend toward HPC platforms built to handle a wider variety of workloads and constructed from more widely available components. This use of commodity hardware characterizes the cluster and grid era of high-performance computing. Clusters and grids remain the dominant methods of deploying HPC in both the commercial and research/academic sectors. Economies of scale and the need to centrally manage computing resources across large organizations with diverse requirements have resulted in the practical reality that widely divergent applications often run on the same, shared HPC infrastructure.

High-performance computing can happen on:

A workstation, desktop, laptop, or even a smartphone!
A supercomputer
A Linux/macOS/Windows/… cluster (see the MPI sketch after this list)
A grid or a cloud
Cyberinfrastructure = any combination of the above
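For the cluster entry above, here is a minimal sketch of what distributed HPC code typically looks like, using MPI, the de facto standard message-passing interface for clusters. The file name and launch commands are example choices, and the sketch assumes an MPI implementation such as Open MPI or MPICH is installed.

    /* hello_mpi.c: minimal sketch of cluster-style parallelism with MPI.
       Build and run, e.g.:
         mpicc hello_mpi.c -o hello_mpi
         mpirun -np 4 ./hello_mpi  */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);               /* start the MPI runtime */

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id     */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count   */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut down cleanly     */
        return 0;
    }

The same source runs unchanged with four processes on a laptop or with thousands across a cluster; the MPI runtime and the scheduler decide where each process lands, which is what makes the "any combination of the above" view practical.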

Afterlife Bots – A Dead Man’s Petition

No, I am neither a TED-famous tech-geek spiritual guru nor in contact with the afterlife. I am just fascinated by the buzzwords "Machine Learning" and "AI", and a little overwhelmed by the number of articles mentioning those words in my news feed.

I remember reading a line in a news article: "Bots are getting better at imitating humans." Why not hire one and cut the workload by 50%? Well, I suppose we are working towards it.

Google recently announced that its AI-enabled assistant (with six voices) can book a haircut appointment seamlessly. (Well, I want a shave as well, and I want it to do my grocery shopping, handpicking the freshest tomatoes from the lot.)

Jokes aside: kudos to the team of brilliant scientists, engineers, and others who are working day and night to make this happen.

Coming back to my original story.

Let’s start with human life (and relationships) – data gathering

“Quite a digital world.” We are capturing and storing our personal life events digitally as much as we can (thanks to social media, external hard disks, and pen drives). Why not store our entire life on a 1,000-petabyte storage device? Capture every second: actions, events, habits, decisions, and so on. Imagine if we could see and experience our parents’ childhood, or see everything Mahatma Gandhi did in his entire life. Interesting, right?

We all know how quickly robotics, machine learning, and AI are evolving.

What if we combine robotics, machine learning, and human life data? Can we create a human-replica bot that responds similarly, makes decisions similarly, and has similar habits, based on the 1,000 petabytes of data fed to it? All in all, can that bot be my replacement after my death? Can it be my AFTERLIFE BOT?

Literally, nothing can replace a dead human being. I was not fortunate enough to meet my grandfather. But will my grandkids or great-grandkids know about me? Honestly, I do not know. We are all striving hard to leave a legacy behind us. Why not use robots and machine intelligence to duplicate ourselves? We have ample data to feed: roughly 79 years (the average human lifespan) works out to about 2.5 billion seconds (79 years × ~31.6 million seconds per year), or 2.5 billion moments. Don’t you want your great-grandkids to remember you after you are gone?