If you have built software for some years, the work has likely more than once been either "some changes" to an existing project or a completely new greenfield project. Existing, a.k.a. brownfield, products already have users, whereas new projects don't see real user volume until they go fully to production in some way. In this post, we will evaluate the differences in mindset software engineers need for a stable software product versus a new greenfield project. Let's get started!

There are multiple ways to read a file line by line with Node.js. In Node.js, files can be read synchronously or asynchronously. With the async path, it is possible to read large files without loading their entire content into memory.
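For a quick taste, here is a minimal sketch contrasting the two paths; the file name big.log is a placeholder:

```js
const fs = require('fs');

// Sync: blocks the event loop and loads the whole file into memory at once
const wholeFile = fs.readFileSync('big.log', 'utf-8');
console.log('read', wholeFile.length, 'characters in one go');

// Async/streaming: data arrives in chunks, so memory stays bounded
const stream = fs.createReadStream('big.log', { encoding: 'utf-8' });
stream.on('data', (chunk) => console.log('got a chunk of', chunk.length, 'characters'));
stream.on('end', () => console.log('done'));
```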

Reading the whole file at once makes the process memory intensive. Reading a file line by line enables us to stop the process at any step, as needed. In this post, we will look into 3 ways to read a file line by line using Node.js, with a memory usage comparison.
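One common way to do this is Node.js's built-in readline module on top of a read stream; a minimal sketch follows (big.log is again a placeholder, and stopping after 10 lines is just for illustration):

```js
const fs = require('fs');
const readline = require('readline');

async function readFirstLines(path, maxLines) {
  const rl = readline.createInterface({
    input: fs.createReadStream(path),
    crlfDelay: Infinity, // treat \r\n as a single line break
  });

  let count = 0;
  for await (const line of rl) {
    console.log(`${count + 1}: ${line}`);
    count += 1;
    if (count >= maxLines) {
      rl.close(); // stop early without reading the rest of the file
      break;
    }
  }
}

readFirstLines('big.log', 10).catch(console.error);
```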

Web scraping is the process of extracting data from a website in an automated way, and Node.js can be used for web scraping. Even though other languages and frameworks are more popular for web scraping, Node.js can do the job well too. In this post, we will learn how to do web scraping with Node.js, both for websites that don’t need JavaScript to load their content and for those that do. Let’s get started!
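For pages that don't need JavaScript to render, one common approach is to fetch the HTML and parse it with a library like cheerio; here is a minimal sketch, assuming cheerio is installed and using a placeholder URL and an example h2 selector:

```js
// npm install cheerio
const cheerio = require('cheerio');

async function scrapeHeadings(url) {
  const response = await fetch(url); // fetch is built into Node.js 18+
  const html = await response.text();
  const $ = cheerio.load(html);
  // Collect the text of every <h2> on the page as an example
  return $('h2').map((_, el) => $(el).text().trim()).get();
}

scrapeHeadings('https://example.com')
  .then((headings) => console.log(headings))
  .catch(console.error);
```

Pages that need JavaScript to load their content generally call for a headless browser such as Puppeteer instead of a plain HTTP request.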

Using RabbitMQ with Node.js to offload work to be processed in the background is very useful. Adding Docker and docker-compose to that mix makes setting up RabbitMQ and Node.js for local development a breeze. In this post, we will explore how to set up RabbitMQ and Node.js with Docker and docker-compose using a dummy send-email example. Let's get rolling!
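As a rough sketch of the producer side, here is how a message could be queued with the amqplib package, assuming a RabbitMQ broker reachable on localhost:5672 (for example, started as a docker-compose service); the queue name and payload are made up:

```js
// npm install amqplib
const amqp = require('amqplib');

async function queueEmail(payload) {
  const connection = await amqp.connect('amqp://localhost:5672');
  const channel = await connection.createChannel();
  const queue = 'send-email'; // hypothetical queue name

  // A durable queue plus persistent messages survive a broker restart
  await channel.assertQueue(queue, { durable: true });
  channel.sendToQueue(queue, Buffer.from(JSON.stringify(payload)), {
    persistent: true,
  });

  console.log('Queued email for', payload.to);
  await channel.close();
  await connection.close();
}

queueEmail({ to: 'someone@example.com', subject: 'Hello!' }).catch(console.error);
```

A separate consumer process would then read from the same queue and do the actual sending, keeping the slow work out of the request path.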

More posts can be found in the archive.

Stay Connected

Follow me on LinkedIn for new posts, engineering insights, and tech takes — straight from the trenches.
