When you run a complex software project, developers weigh the capabilities of the front-end frameworks - Angular, React, Vue, Polymer, and Ember, to name a few. For the back end, the options range from ASP.NET and ASP.NET Core to Node.js, Ruby on Rails, Django, and others.
Angular and .NET technologies, such as .NET Core, are the choice of many development teams. With Microsoft and Google behind them, .NET and Angular let teams build powerful web applications that, if managed properly, can serve the needs of the business.
Projects with complex architecture demand frequent yet smooth testing and build deployments. These are key to low downtime and an excellent experience for end users.
So what is the secret to building high-performing business software solutions?
When you think of Docker, you probably don’t think of .NET or Windows. There are a lot of good reasons to use Docker:
The key difference between containers and other virtualization solutions is that container-based virtualization uses the host operating system's kernel to run isolated guest instances, the containers themselves.
Containers are more lightweight than other solutions because they share the OS kernel instead of relying on a hypervisor to manage resources. Each container still has its own root file system, processes, memory, devices, and network interfaces.
Each application runs in its own container instance, which minimizes configuration problems. There is no need to worry about which operating system the end user has if you ship the entire runtime environment along with the application.
Containers need far fewer resources than other stacks, thanks to using the OS kernel and not relying on a hypervisor. This allows you to run many more containers than virtual machines on the same machine.
Containers use the host's OS kernel, which means a guest operating system is no longer needed. This frees up a large amount of resources. There are many operating systems optimized solely for running containers (CoreOS, Ubuntu Snappy, RancherOS, Red Hat's Atomic Host, VMware's Photon, Microsoft's Nano Server), and slimmed-down container images tailored to a particular application environment (NGINX, BusyBox, PostgreSQL, Rails) are available as well. This keeps the total per-application overhead in a container to a minimum.
Containers also provide excellent portability. Since they do not require a guest operating system, it is easy to migrate them from one server to another.
Application security is improved by isolating each container. Keep in mind, however, that depending on the container technology, there may be "holes" in multi-tenant container isolation.
Open a command prompt and create a folder named "Hello". Go to the new folder and enter the following commands:
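Assuming the .NET Core SDK is installed, the two commands covered in this section are:

```shell
$ dotnet new console
$ dotnet run
```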
Let's briefly review these commands.
$ dotnet new console
dotnet new creates an up-to-date Hello.csproj project file with the dependencies necessary to build a console application. It also creates Program.cs, a simple file that contains the entry point of the application.
The project file specifies all the data needed to restore dependencies and create a program.
The OutputType tag indicates that we are building an executable, in other words, a console application.
The TargetFramework tag indicates which .NET implementation we are targeting. In more advanced scenarios, you can specify several target frameworks and build for all of them in one operation. In this tutorial, we build for .NET Core 2.0.
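For reference, a minimal Hello.csproj for this tutorial might look like the following sketch; the file dotnet new generates for your SDK version may differ slightly:

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

</Project>
```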
The program starts with the using System directive. This means "bring the System namespace into scope for this file." The System namespace contains basic constructs, such as the String and Console types.
Then we define a namespace named Hello. You can change this to any other namespace. Within it, the Program class is defined with a Main method, which takes an array of strings as its argument. This array contains the arguments passed when the compiled program is invoked. In our example, the program simply prints "Hello World!" to the console.
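The generated Program.cs is roughly the following (a sketch; the template can vary slightly between SDK versions):

```csharp
using System;

namespace Hello
{
    class Program
    {
        // Entry point: args holds the command-line arguments
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
```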
$ dotnet run
dotnet run calls dotnet build to make sure the build succeeds, and then calls dotnet to launch the application.
The Hello .NET Core console application runs successfully on the local machine. Now let's go one step further and build and run the application in Docker.
To get started, open a text editor. We are still working from the Hello directory where we created the application.
Add the following Docker instructions to the new file for Windows or Linux containers. When finished, save it in the root of the Hello directory as Dockerfile, without an extension (you may need to set the file type to All types (*.*)).
The Dockerfile contains Docker build instructions that are executed sequentially.
The first instruction must be FROM. It initializes a new build stage and sets the base image for the remaining instructions. The multi-architecture tags pull a Windows or Linux image, depending on the container mode of Docker for Windows. The base image for our example is 2.0-sdk from the microsoft/dotnet repository.
The WORKDIR instruction specifies the working directory for all the RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile. If this directory does not exist, it is created. In this case, WORKDIR sets the application directory.
The COPY instruction copies new files or directories from the source path and adds them to the container's file system. With this instruction, we copy the C# project file into the container.
The RUN instruction executes commands in a new layer on top of the current image and commits the result. The resulting image is used for the next step in the Dockerfile. Here, dotnet restore fetches the dependencies listed in the C# project file.
The next COPY instruction copies the remaining files into the container in new layers.
We publish the application with this RUN instruction. The dotnet publish command compiles the application, reads the dependencies specified in the project file, and publishes the resulting set of files to a directory. Our application is published with the Release configuration and output to the default directory.
The ENTRYPOINT instruction allows the container to run as an executable.
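Putting the instructions above together, the Dockerfile might look like this sketch, assuming the microsoft/dotnet:2.0-sdk base image and an output assembly named Hello.dll:

```dockerfile
# base image: .NET Core 2.0 SDK (multi-arch tag: Windows or Linux)
FROM microsoft/dotnet:2.0-sdk

# working directory for all following instructions
WORKDIR /app

# copy the project file and restore dependencies first (cacheable layer)
COPY Hello.csproj .
RUN dotnet restore

# copy the remaining sources and publish with the Release configuration
COPY . .
RUN dotnet publish -c Release -o out

# run the published application
ENTRYPOINT ["dotnet", "out/Hello.dll"]
```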
Now you have a Dockerfile that copies the project into an image, restores its dependencies, publishes the application, and sets its entry point.
With the Dockerfile written, Docker can now build the application and then launch the container.
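The build and run steps might look like this, assuming the image is tagged hello (the tag name is illustrative):

```shell
$ docker build -t hello .
$ docker run --rm hello
```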
The output of the docker build command should look similar to the console output shown below:
As you can see from the output, the Docker subsystem uses the Dockerfile to create the container.
The output of the docker run command should look similar to the console output shown below:
Congratulations! You just created a .NET Core console application, wrote a Dockerfile for it, built a Docker image, and ran the application inside a container.
If you want to dockerize the Angular application, you should add the build scripts first. This is an easy process: developers only need to add yarn install and ng build.
The final build.csx file will look like this:
Now you can set up a common file that holds all the build scripts.
This can be docker-compose.ci.build.yml, but for convenience when using docker-compose, I will use a docker-compose.override.yml file in the /scripts folder.
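As an illustration, such an override file might look like the following sketch; the service names, image tags, and build contexts here are hypothetical and should match your own project layout:

```yaml
version: '3'

services:
  api:
    image: hello-api
    build:
      context: ../Api
      dockerfile: Dockerfile
  angular:
    image: hello-angular
    build:
      context: ../Angular
      dockerfile: Dockerfile
```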
Similar to the Dockerfile that Visual Studio created in the API folder, we are about to create a Dockerfile for the Angular application in the Angular folder. Nginx is used here as the base container.
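Because the build scripts already produce the compiled bundle, the Angular Dockerfile can stay minimal. A sketch, assuming ng build outputs to a dist folder:

```dockerfile
# serve the pre-built Angular bundle with nginx
FROM nginx:alpine

# copy the compiled output (produced by ng build) into nginx's web root
COPY ./dist /usr/share/nginx/html
```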
When building an ASP.NET application, a developer usually uses the microsoft/aspnetcore-build base image. However, with Angular and dotnet-script in the mix, you will need some extra components installed to make everything work properly.
How do you do that?
You should create a new build image on top of microsoft/aspnetcore-build and install yarn, dotnet-script, and @angular/cli.
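A sketch of such a build image; the exact install commands are an assumption and depend on the base image's tooling (npm is used here because Node.js ships with microsoft/aspnetcore-build):

```dockerfile
# custom build image on top of the ASP.NET Core build image
FROM microsoft/aspnetcore-build:2.0

# Node.js is already included in the base image, so yarn and the
# Angular CLI can be installed globally through npm
RUN npm install -g yarn @angular/cli

# dotnet-script can be installed here as well; the exact method
# depends on your SDK version
```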
When finishing the project, you need to build and deploy the final application to a server. All it takes to deploy the API application to a server at 100.100.100.100 is this:
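One possible shape of that deployment command, assuming the remote server exposes the Docker daemon over TCP (the port and the exposed endpoint are illustrative assumptions):

```shell
$ docker-compose -H tcp://100.100.100.100:2375 up -d
```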
The best part of containers and the approach used above is that, once everything is set up, starting images and configuring services, databases, a Redis cache, RabbitMQ, Elasticsearch, identity providers, background jobs, or anything else you might need can be managed with a single command.