How to Install React JS in Windows

Reading Time: 4 minutes

React.js (React) is an open-source JavaScript library for building user interfaces. Because React is a library, our main focus in this article is installing a JavaScript runtime environment and a package manager so that we can download and install libraries, including React.

When we are done, you will have a React environment you can use to start development on your Liquid Web server.

 

Install Node.js

The first step is to download the Node.js installer for Windows. Let’s use the latest Long Term Support (LTS) version for Windows and choose the 64-bit version, using the Windows Installer icon.

Once downloaded, we run the Node.js installer (.msi file) and follow the steps to complete the installation.

Now that we have Node.js installed, we can move on to the next step.

 

The Command Prompt Environment

We’ll need to use the command prompt (command line) to interact with Node.js and the Node Package Manager (NPM) to install React. Let’s take a few minutes to cover the commands we’ll need to use to get around. Here are the basic commands we will need to get around and create folders/directories:
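
  • dir: lists the files and folders in the current directory
  • cd <folder>: changes to a directory (cd.. moves up one level)
  • mkdir <folder>: creates a new directory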

 

Open a Command Prompt in Windows

Click the Start Menu, start typing the word command, then choose either Command Prompt or the Node.js command prompt; either choice will work.

A command prompt window will open with the path showing as C:\Users\<username> where the <username> on your system will be the user you are logged in as.

To execute a command, we type the command and any required options, then press Enter and review the results. Let’s walk through each of the commands listed above to see what happens:

dir

Let’s look at the contents of the downloads folder with this command:

dir downloads

The prompt shows we are still in the directory C:\Users\ReactUser>; however, we are looking at the contents of C:\Users\ReactUser\downloads, and we see that it has one file. Let’s move to the downloads directory with this command:

cd downloads

We’ve changed to the downloads folder as the command prompt shows C:\Users\ReactUser\Downloads>. You can use the dir command to see the contents of this directory/folder. Next, let’s go back to the previous directory with this command:

cd..

Now we are back to where we started. Let’s create a new directory for our first project and name it reactproject1. We’ll use the command:

mkdir reactproject1

Again, we use the dir command to list the files within our current folder.

dir


Install React on Windows

There are two ways to install React for your projects. Let’s look at each approach so that you can decide which one you prefer to use.

Option 1

  • Create a project folder
  • Change to the project folder
  • Create a package.json file
  • Install React and other modules you choose

This install option gives you full control over everything that is installed and defined as dependencies.

Step 1: To get started, we need to open a command prompt.

Step 2: Create a project folder named reactproject1:

mkdir reactproject1

Press Enter to execute the command, and we get a new directory called reactproject1. If you already created this directory as part of the Command Prompt examples, you can skip this step; running the command again will only tell you that the directory already exists.

Step 3: Move to the project folder, using cd reactproject1, so we can install React into it.

cd reactproject1

At this point, you will see your prompt indicate C:\Users\ReactUser\reactproject1>.

Step 4: Create a package.json file. The following command will walk you through the prompts to create it:

npm init

Step 5: Install React using npm install --save react. This will install React into your project and update the package.json file with the new dependency.

npm install --save react

We can install additional packages using npm install --save and the name of the package we want to install. Here we are installing react-dom:

npm install --save react-dom
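
After these installs, the dependencies section of package.json will list both packages. As a rough sketch of what the file might now contain (the name and version come from your npm init answers, and the package versions shown are only illustrative; they will match whatever npm actually installed):

{
  "name": "reactproject1",
  "version": "1.0.0",
  "dependencies": {
    "react": "^16.8.6",
    "react-dom": "^16.8.6"
  }
}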

 

Option 2

  • Install the Create-React-App package to simplify the process of creating and installing React into your projects

Step 1: To get started, we need to open a command prompt and type npm install -g create-react-app. This installs the Create-React-App module, which makes it very easy to create and deploy React into projects with a single command.

Note
When using create-react-app, ensure you are in the desired directory/folder location, as this command will create the project folder in the current path.

npm install -g create-react-app

Create-React-App is installed in the following location: C:\Users\<username>\AppData\Roaming\npm\node_modules\create-react-app\

Once Create-React-App is installed, we can use it to create a project folder and install React and dependencies automatically.

To make sure you are in the desired directory when creating a new project, you can use dir to see where you are, and cd <directory_name> or cd.. to get to the desired location.

Step 2: To create a new project and deploy React into it, we run create-react-app <project_name>. Let’s do this to create reactproject2.

create-react-app reactproject2

The entire process is automated and begins with creating a new React app folder for the project, then installs packages and dependencies. The default packages include react, react-dom, and react-scripts. The installation will take a few minutes.
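
Create-React-App also wires these packages together through npm scripts in the generated package.json. As a sketch of the scripts section it creates (exact contents can vary by Create-React-App version):

"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "test": "react-scripts test",
  "eject": "react-scripts eject"
}

The start script is what we will use next to run the application.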

Run a React Project Application

To run our new project, we need to use the command prompt to change to the project folder, then start it. The cd reactproject2 command will take us to the reactproject2 folder.

cd reactproject2

And npm start will run the project application:
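
npm start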

The default browser will open and load the project, typically at http://localhost:3000.

You now have your environment set for building out projects! If you are running our lightning-fast servers, our support team is at your fingertips for any questions you may have.

Top 4 Lessons Learned Using Ubuntu

Reading Time: 5 minutes

When choosing a server operating system, there are a number of factors to weigh and choices to make. An often-discussed OS, Ubuntu, is a popular choice and offers great functionality with a vibrant and helpful community. However, if you’re unfamiliar with Ubuntu and have not worked with either the server or desktop versions, you may encounter differences in common tasks and functionality from operating systems you’ve worked with previously. I’ve been a system administrator running my own servers for a number of years, almost all of them Ubuntu. Here are the top four lessons I’ve learned while running Ubuntu on my server:

  1. Understanding the “server” vs. “desktop” model
  2. Deciphering the naming scheme/release schedule
  3. Figuring out the package manager
  4. Networking considerations on Ubuntu

 

Understanding “Server” vs. “Desktop” Model

Ubuntu comes in a number of images and released versions. One of the first choices to consider is whether you want the “desktop” or “server” release. When running an application or project on a server, the “server” image is the choice to make. You may be wondering what the difference is, and it comes down to two main differences:

  • The “desktop” image has a graphical user interface (GUI), whereas the “server” image does not.
  • The “desktop” image has default applications more suited for a user’s workstation, whereas the “server” image has default applications more suited for a web server.

It is really that simple. Currently, the majority of the differences relate to the default applications offered when installing Ubuntu and whether or not you need a GUI. When I first started using Ubuntu, there were a number of differences between the two (such as a slightly tweaked Linux kernel on the “server” image designed to perform better on server-grade hardware), but today it is a much simpler decision.

The choice comes down to saving time by adding or removing the applications (also called “packages”) that you do or don’t need. At Liquid Web, we offer “server” images since they are best suited to running a web server. You do have a choice of Ubuntu version, which brings me to my next lesson learned:

 

Deciphering the Naming Scheme/Release Schedule

Both Ubuntu and CentOS are based on what is called an “upstream” provider: another operating system that Ubuntu and CentOS then tweak and re-publish. For Ubuntu, this upstream provider is Debian, whereas CentOS is based on Red Hat (which is in turn based on Fedora). It was confusing at first for me to try to understand what the names for Ubuntu releases meant: “Vivid Vervet”, “Xenial Xerus”, “Zesty Zapus”; but that’s just a cute “name” the developers give a particular version until it is released.

The company that oversees Ubuntu, called Canonical, has a regular, set release schedule for Ubuntu. This is a distinct difference from CentOS, which relies on Red Hat, which further relies on Fedora. Every six months a release of Ubuntu is published, and every fourth release (or once every two years) a “long-term support” or LTS release is published. The numbering used for releases is straightforward: YY.MM, where YY is the two-digit year and MM is the month of the release. The latest release is 19.04, meaning it was released in April of 2019.

This was tricky for me to navigate initially and created headaches, since different versions receive different levels of support and updates. For some people, this is not an issue. As a developer, you may want to build your application on the latest and greatest OS that takes advantage of new technologies. Receiving support may not matter because you won’t be using the server for very long. You may also be in a position where you need to know that your OS won’t have any server-crashing bugs.

Which version do you choose? How do you know whether your OS will be stable and receive patches/updates? I ran into these issues when I accidentally created a server that lost support just a few months later; however, I learned an easy trick to ensure that in the future I chose an LTS release:

  • LTS releases start with an even number and end in 04 (for example, 16.04 and 18.04 are LTS releases, while 19.04 is not)

Liquid Web offers a variety of Ubuntu releases within our Core-Managed and Self-Managed support tiers:

Notice that all the versions of Ubuntu that we offer are even-numbered .04 releases. These are all LTS releases, ensuring that you receive the longest possible lifespan and vital security updates for your OS.

 

Ubuntu’s Package Manager

Prior to working with Ubuntu, I had the most experience with CentOS. Moving to Ubuntu meant having to learn how to install packages since the CentOS package manager, yum, was no longer available. I discovered that the general process is the same with a few minor differences.

The first difference I encountered is that Ubuntu uses the apt-get command. This is part of the Apt package manager and follows a very similar syntax to the yum command for the most basic command: installing packages. To install a package I had to use:

apt-get install $package

Where $package was the name of the package. I didn’t always know the name of the package I wanted, though, so I thought I could search much as I could with yum search $package; however, Apt uses a separate utility for searching:

apt-cache search $package

If you forget and try to search with apt-get instead, you’ll get an error that you’ve run an illegal operation.

One similarity to yum that is super useful: the “y” flag to bypass any prompts about installing the package. This was my favorite “trick” on CentOS to speed up installing packages and improve my efficiency, and knowing that it works the same on Ubuntu was a major relief:
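
apt-get install -y $package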

Of course, installing packages meant I had to add the repositories I wanted. This was a significant change since I was used to the CentOS location of /etc/yum.repos.d/ for adding .repo files. With the Apt system on Ubuntu, I learned that I needed to add a .list file, pointing to the repository in question, in the /etc/apt/sources.list.d/ folder. Take Nginx for example; it was quite easy to add once I realized what had to go where:
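
As an illustration, a file named /etc/apt/sources.list.d/nginx.list would hold a single repository line along these lines (the codename bionic is only an example; it depends on your Ubuntu release):

deb http://nginx.org/packages/ubuntu/ bionic nginx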

 

From here it was as straightforward as installing Nginx with the apt-get install nginx command.

 

There was a time when adding a repository did not immediately work for me; however, it was quickly resolved by telling Apt that it needed to refresh/update the list of repos it uses with the apt-get update command.

Note
Don’t worry, the apt-get update command does not update any packages. It simply runs through the configured repositories and ensures that your system is aware of them, as well as the packages they offer.

 

Networking Considerations on Ubuntu

The last major lesson I want to talk about was one learned painfully. Networking on Ubuntu, by default, is handled from the /etc/network/ directory.

There is a flat file called interfaces, and this contains all the networking information for your host (configured IPs/gateways, DNS servers, etc.):
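
Here is a minimal sketch of what such a file can look like; the interface names match the discussion below, while the addresses are illustrative placeholders:

auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1

auto eth0:1
iface eth0:1 inet static
    address 192.0.2.11
    netmask 255.255.255.0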

Notice in the above configuration that both the primary eth0 interface and the sub-interface, eth0:1, are configured in the same file (/etc/network/interfaces). This caught me in a bad spot once when I made a configuration syntax mistake, restarted networking, and suddenly lost all access to my host. That interfaces file contained all the networking components for my host, and when I restarted networking while it contained a mistake, it prevented ANY networking from coming back up. The only way to correct the issue was to console the server, correct the mistake, and restart networking again.

Though this is the default configuration for Ubuntu, there is now a way to split the configuration into separate files for individual interfaces so that a simple mistake does not break all of the networking: /etc/network/interfaces.d/

To properly use this folder and have individual configuration files for each interface, you need to modify the /etc/network/interfaces file to tell it to look in the interfaces.d/ folder. This is a simple change of adding one line:

source /etc/network/interfaces.d/*

This will tell the main interfaces configuration file to also review /etc/network/interfaces.d/ for any additional configuration files and use those as well. This would have saved me a lot of hassle had I known about it earlier.

Hopefully, you can use my mistakes and struggles to your advantage. Ubuntu is a great operating system, is supported by a fantastic community, and is covered by “the most helpful humans in hosting” at Liquid Web. If you have any questions about Ubuntu, our support, or how you might best use it in your environment, feel free to contact us at support@liquidweb.com or via phone at 1-800-580-4985.

 

 


What is DNF (Dandified Yum)?

Reading Time: 1 minute
DNF (Dandified Yum) 101: Basic Package Manager Interaction
I. What is DNF (Dandified Yum)?
II. DNF Examples: Install, Remove, Upgrade, and Downgrade

What is DNF (Dandified Yum)?

Yum, or the Yellowdog Updater Modified, is a package manager for RPM-based distributions; DNF, sometimes referred to as Dandified Yum, is the next generation of that package manager.

Do yum commands still work with DNF?

Yes, for the most part DNF usage is very similar to yum’s. Additional information on DNF detailing the similarities and differences will be available in the Liquid Web Knowledge Base very soon.
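
As a quick illustration, the most common operations use the same verbs you would use with yum ($package is a placeholder for a real package name):

dnf install $package
dnf remove $package
dnf upgrade $package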

When did DNF become the default package manager for Fedora?

DNF has been the default package manager for Fedora since Fedora 22. Dandified Yum was first introduced in Fedora 18.

Why was yum replaced with DNF?

Yum has long been considered a poor performer, notorious for high memory usage and slowness when resolving dependencies. DNF uses libsolv, an external dependency resolver, along with the hawkey library for resolving dependencies, whereas yum used its own internal dependency resolver.