Single-threaded vs Multi-threaded Languages
Node.js - single-threaded
Golang/Java - multi-threaded
Terminal command to inspect OS processes, cores, and the threads running on them - htop
Node.js runs JavaScript on a single thread, so a very heavy (CPU-bound) task in a Node.js process occupies that one thread, pins a single core until it finishes, and degrades performance for everything else in the process.
The event loop in Node.js is responsible for managing the execution of JavaScript code, and it runs on a single thread. This means that any CPU-intensive or blocking task within your Node.js application, if not offloaded to worker threads or external processes, can monopolize one core and potentially lead to poor performance for other parts of your application that depend on the event loop.
Concurrency:
Concurrency is the concept of making progress on multiple tasks within the same time period, even if they never run at the exact same instant.
It does not require multiple physical processors or CPU cores but can be achieved on a single core through task switching.
Concurrency is often used to improve the efficiency of a program by allowing it to perform useful work while waiting for I/O operations, such as reading from a file or making a network request, to complete.
In a concurrent system, tasks may overlap in execution, but they are not necessarily running at the exact same moment. Context switching is used to give the appearance of simultaneous execution.
Parallelism:
Parallelism, on the other hand, involves the simultaneous execution of multiple tasks, where each task runs on its own physical processor, CPU core, or hardware thread.
It typically requires hardware with multiple cores or processors to achieve true parallelism.
Parallelism is used to speed up computation-intensive tasks by breaking them into smaller sub-tasks that can be executed in parallel on separate cores.
Parallelism results in actual simultaneous execution, making it suitable for tasks that can be divided into independent parts.
Single Core:
Requests stack up. Because operations are asynchronous, when execution reaches a promise, Node.js queues its continuation under the hood in the meantime and moves on to processing the next request.
Can be tested using the ApacheBench tool: ab -n 1000 -c 16 http://localhost:3005/ (1000 requests, 16 at a time). It ideally shouldn't be tested by opening http://localhost:3005 in the browser, since the browser would more likely than not rate-limit repeated requests to the same server.
Now, if we take some Node.js cluster code (generated using GPT): what it does is create a worker process on each core present in the system. In the first process started, cluster.isMaster is true, and in the forked worker processes it is false.
So when I send 100 different requests, all my cores are now used in parallel, since each of them has a Node.js process running on it.