White paper for Backend developers
This repository is a visual cheatsheet on the main topics in Backend-development. All material is divided into chapters that include different topics. There are three main parts to each topic:
Available translations:
English
Русский (Russian)
If you want to help the project, feel free to send your issues or pull requests.
For a better experience, enable the dark theme.
Internet is a worldwide system that connects computer networks from around the world into a single network for storing/transferring information. The Internet was originally developed for the military. But soon it began to be implemented in universities, and then it could be used by private companies, which began to organize networks of providers that provide Internet access services to ordinary citizens. By early 2020, the number of Internet users exceeded 4.5 billion.
Your computer does not have direct access to the Internet. Instead, it has access to your local network, to which other devices are connected via a wired (Ethernet) or wireless (Wi-Fi) connection. The organizer of such a network is a special minicomputer called a router. This device connects you to your Internet Service Provider (ISP), which in turn is connected to other higher-level ISPs. All of these interactions make up the Internet, and your messages always pass through different networks before reaching the final recipient.
Any device that is on any network.
A special computer on the network that serves requests from other computers.
There are several topologies (ways of organizing a network): Point to point, Daisy chain, Bus, Ring, Star and Mesh. The Internet itself cannot be assigned to any single topology, because it is an incredibly complex system that mixes different topologies.
Domain names are human-readable addresses of web servers available on the Internet. They consist of parts (levels) separated from each other by a dot. Each of these parts provides specific information about the domain name: for example, the country, the service name, localization, etc.
The ICANN Corporation is the founder of the distributed domain registration system. It gives accreditations to companies that want to sell domains. In this way a competitive domain market is formed.
A domain name cannot be bought forever. It is leased for a certain period of time. It is better to buy domains from accredited registrars (you can find them in almost any country).
IP address is a unique numeric address that is used to recognize a particular device on the network.
- External and publicly accessible IP address that belongs to your ISP and is used to access the Internet by hundreds of other users.
- The IP address of your router in your ISP's local network, the same IP address from which you access the Internet.
- The IP address of your computer in the local (home) network created by the router, to which you can connect your devices. Typically, it looks like 192.168.XXX.XXX.
- The internal IP address of the computer, inaccessible from the outside and used only for communication between the running processes. It is the same for everyone - 127.0.0.1 or just localhost.
One device (computer) can run many applications that use the network. In order to correctly recognize where the data coming over the network should be delivered (to which of the applications), a special number called a port is used. That is, each running process on a computer that uses a network connection has its own personal port.
Version 4 of the IP protocol. It was developed in 1981 and limits the address space to about 4.3 billion (2^32) possible unique addresses.
Over time, address space began to be allocated at a much faster rate, forcing the creation of a new version of the IP protocol capable of storing more addresses. IPv6 can issue 2^128 (an astronomically large number) unique addresses.
DNS (Domain Name System) is a decentralized Internet address naming system that allows you to create human-readable alphabetic names (domain names) corresponding to the numeric IP addresses used by computers.
DNS consists of many independent nodes, each of which stores only those data that fall within its area of responsibility.
A server that is located in close proximity to your Internet Service Provider. It is the server that searches for addresses by domain name, and also caches them (temporarily storing them for quick retrieval in future requests).
- A record - associates the domain name with an IPv4 address.
- AAAA record - links a domain name with an IPv6 address.
- CNAME record - redirects to another domain name.
- and others - MX record, NS record, PTR record, SOA record.
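For illustration, here is how a few of these records might look in simplified DNS zone-file notation (the domain and addresses below use values reserved for documentation):

```
example.com.      A      192.0.2.1             ; IPv4 address for the domain
example.com.      AAAA   2001:db8::1           ; IPv6 address for the domain
www.example.com.  CNAME  example.com.          ; alias pointing to another name
example.com.      MX     10 mail.example.com.  ; mail server with priority 10
```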
Modern web applications consist of two parts: Frontend and Backend. Together they implement the client-server model.
The tasks of the Frontend are:
- A special markup language HTML is used to create web pages.
- CSS style language is used to style fonts, layout of content, etc.
- JavaScript programming language is used to add dynamics and interactivity.
As a rule, these tools are rarely used in their pure form, as so-called frameworks and preprocessors exist for more convenient and faster development.
These are usually different types of input forms that can be conveniently interacted with.
Tasks of the Backend:
Checking for permissions and access, all sorts of validations, etc.
A wide range of tasks can be implied here: working with databases, information processing, computation, etc. This is, so to speak, the heart of the Backend world. This is where all the important and interesting stuff happens.
Browser is a client which can be used to send requests to a server for files which can then be used to render web pages. In simple terms, a browser can be thought of as a program for viewing HTML files, which can also search for and download them from the Internet.
Query handling, page rendering, and the tabs feature (each tab has its own process to prevent the contents of one tab from affecting the contents of the other).
Allow you to change the browser's user interface, modify the contents of web pages, and modify the browser's network requests.
An indispensable tool for any web developer. It allows you to analyze all possible information related to web pages, monitor their performance, logs and, most importantly for us, track information about network requests.
The use of VPNs and proxies has become quite common in recent years. With these technologies, users can get basic anonymity when surfing the web, as well as bypass various regional restrictions.
A technology that allows you to become a member of a private network (similar to your local network), where requests from all participants go through a single public IP address. This allows you to blend in with the general mass of requests from other participants.
- Simple procedure for connection and use.
- Reliable traffic encryption.
- There is no guarantee of 100% anonymity, because the owner of the network knows the IP addresses of all participants.
- VPNs are not suitable for working with multiple accounts, because all accounts operating from the same VPN are easily detected and blocked.
- Free VPNs tend to be heavily loaded, resulting in unstable performance and slow download speeds.
A proxy is a special server on the network that acts as an intermediary between you and the destination server you intend to reach. When you are connected to a proxy server all your requests will be performed on behalf of that server, that is, your IP address and location will be substituted.
- The ability to use an individual IP address, which allows you to work with multi-accounts.
- Stability of the connection due to the absence of high loads.
- Proxy connections are supported natively by operating systems and browsers, so no additional software is required.
- There are proxy varieties that provide a high level of anonymity.
- The unreliability of free solutions, because the proxy server can see and control everything you do on the Internet.
Hosting is a special service provided by hosting providers, which allows you to rent space on a server (which is connected to the Internet around the clock), where your data and files can be stored. There are different options for hosting, where you can use not only the disk space of the server, but also the CPU power to run your network applications.
One physical server that distributes its resources to multiple tenants.
Virtual servers that emulate the operation of a separate physical server and are available for rent to the client with maximum privileges.
Renting a full physical server with full access to all resources. As a rule, this is the most expensive service.
A service that uses the resources of several servers. When renting, the user pays only for the actual resources used.
A service that gives the customer the opportunity to install their equipment on the provider's premises.
| № | Layer | Protocols used |
|---|-------|----------------|
| 7 | Application layer | HTTP, DNS, FTP, POP3 |
| 6 | Presentation layer | SSL, SSH, IMAP, JPEG |
| 5 | Session layer | APIs, Sockets |
| 4 | Transport layer | TCP, UDP |
| 3 | Network layer | IP, ICMP, IGMP |
| 2 | Data link layer | Ethernet, MAC, HDLC |
| 1 | Physical layer | RS-232, RJ45, DSL |
OSI (The Open Systems Interconnection model) is a set of rules describing how different devices should interact with each other on the network. The model is divided into 7 layers, each of which is responsible for a specific function. All this is to ensure that the process of information exchange in the network follows the same pattern and all devices, whether it is a smart fridge or a smartphone, can understand each other without any problems.
At this level, bits (ones/zeros) are encoded into physical signals (current, light, radio waves) and transmitted further by wire (Ethernet) or wirelessly (Wi-Fi).
Physical signals from layer 1 are decoded back into ones and zeros, errors and defects are corrected, and the sender and receiver MAC addresses are extracted.
This is where traffic routing and the generation of IP packets take place.
The layer responsible for data transfer. There are two important protocols:
- TCP is a protocol that ensures reliable data transmission. TCP guarantees data delivery and preserves the order of the messages. This has an impact on the transmission speed. This protocol is used where data loss is unacceptable, such as when sending mail or loading web pages.
- UDP is a simple protocol with fast data transfer. It does not use mechanisms to guarantee the delivery and ordering of data. It is used e.g. in online games where partial packet loss is not crucial, but the speed of data transfer is much more important. Also, requests to DNS servers are made through UDP protocol.
Responsible for opening and closing communications (sessions) between two devices. Ensures that the session stays open long enough to transfer all necessary data, and then closes quickly to avoid wasting resources.
Transmission, encryption/decryption, and data compression. This is where data that arrives as ones and zeros is converted into the desired formats (PNG, MP3, PDF, etc.).
Allows the user's applications to access network services such as database query handler, file access, email forwarding.
HTTP (HyperText Transfer Protocol) is the most important protocol on the Internet. It is used to transfer data of any format. The protocol itself works on a simple principle: request -> response.
HTTP messages consist of a header section containing metadata about the message, followed by an optional message body containing the data being sent.
Additional service information that is sent with the request/response.
Common headers: Host, User-Agent, If-Modified-Since, Cookie, Referer, Authorization, Cache-Control, Content-Type, Content-Length, Last-Modified, Set-Cookie, Content-Encoding.
Main: GET, POST, PUT, DELETE.
Others: HEAD, CONNECT, OPTIONS, TRACE, PATCH.
Each response from the server has a special numeric code that characterizes the state of the sent request. These codes are divided into 5 main classes:
- 1xx - Service information
- 2xx - Successful request
- 3xx - Redirect to another address
- 4xx - Client side error
- 5xx - Server side error
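To make the request -> response principle concrete, here is a simplified exchange (the host, sizes, and header values are illustrative):

```
GET /index.html HTTP/1.1          <- request: method, path, protocol version
Host: example.com                 <- request headers follow
User-Agent: Mozilla/5.0

HTTP/1.1 200 OK                   <- response: protocol version and status code
Content-Type: text/html
Content-Length: 1256

<html>...</html>                  <- optional message body
```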
Same HTTP, but with encryption support. Your apps should use HTTPS to be secure.
The HTTP protocol does not provide the ability to save information about the status of previous requests and responses. Cookies are used to solve this problem. Cookies allow the server to store information on the client side that the client can send back to the server. For example, cookies can be used to authenticate users or to store various settings.
A technology that allows one domain to securely receive data from another domain.
A special header that allows you to recognize and eliminate certain types of web application vulnerabilities.
- HTTP 1.0: Uses separate connections for each request/response, lacks caching support, and has plain text headers.
- HTTP 1.1: Introduces persistent connections, pipelining, the Host header, and chunked transfer encoding.
- HTTP 2: Supports multiplexing, header compression, server push, and a binary framing format.
- HTTP 3: Built on QUIC, offers improved multiplexing, reliability, and better performance over unreliable networks.
Compared to the OSI model, the TCP/IP stack has a simpler architecture. In general, the TCP/IP model is more widely used and practical, and the OSI model is more theoretical and detailed. Both models describe the same principles, but differ in the approach and protocols they include at their levels.
Defines how data is transmitted over the physical medium, such as cables or wireless signals.
Protocols: Ethernet, Wi-Fi, Bluetooth, Fiber optic.
Routing data across different networks. It uses IP addresses to identify devices and routes data packets to their destination.
Protocols: IP, ARP, ICMP, IGMP
Data transmission between two devices. It uses protocols such as TCP (reliable but slower) and UDP (fast but unreliable).
Provides services to the end user, such as web browsing, email, and file transfer. It interacts with the lower layers of the stack to transmit data over the network.
Protocols: HTTP, FTP, SMTP, DNS, SNMP.
The quality of networks, including the Internet, is far from ideal. This is due to the complex structure of networks and their dependence on a huge number of factors. For example, the stability of the connection between the client device and its router, the quality of service of the provider, the power and performance of the server, the physical distance between the client and the server, etc.
The time it takes for a data packet to travel from sender to receiver. It depends mostly on the physical distance.
Not all packets traveling over the network can reach their destination. This happens most often when using wireless networks or due to network congestion.
The time it takes for the data packet to reach its destination + the time to respond that the packet was received successfully.
Delay fluctuations, unstable ping (for example, 50ms, 120ms, 35ms...).
The IP protocol does not guarantee that packets are delivered in the order in which they are sent.
A procedure that allows you to trace which nodes (and their IP addresses) a packet you send passes through before it reaches its destination. Tracing can be used to identify problems in a computer network and to examine/analyze the network.
The easiest way to check whether a server is up and reachable.
Due to dropped connections, not all packets sent over the network reach their destination.
A powerful program with a graphical interface for analyzing all traffic that passes through the network in real time.
The most important PC component to which all other elements are connected.
- Chipset - a set of electronic components that are responsible for the communication of all motherboard components.
- CPU socket - socket for mounting the processor.
- VRM (Voltage Regulator Module) - a module that converts the incoming voltage (usually 12V) to a lower voltage to run the processor, integrated graphics, memory, etc.
- Slots for RAM.
- Expansion slots PCI-Express - designed for connection of video cards, external network/sound cards.
- Slots M.2 / SATA - designed to connect hard disks and SSDs.
The most important device that executes instructions (programme code). Processors only work with 1 and 0, so all programmes are ultimately a set of binary code.
- Registers - the fastest memory in a PC, has an extremely small capacity, is built into the processor and is designed to temporarily store the data being processed.
- Cache - slightly less fast memory, which is also built into the processor and is used to store a copy of data from frequently used cells in the main memory.
- Processors can have different architectures. Currently, the most common are the x86 architecture (desktop and laptop computers) and ARM (mobile devices as well as the latest Apple computers).
Fast, low capacity memory (4-16GB) designed to temporarily store program code, as well as input, output and intermediate data processed by the processor.
Large capacity memory (256GB-1TB) designed for long-term storage of files and installed programmes.
A separate card that processes data and renders it into images for display on a monitor. This device is also called a discrete graphics card. Usually needed for those who do 3D modelling or play games.
Built-in graphics card is a graphics card built into the processor. It is suitable for daily work.
A device that receives and transmits data from other devices connected to the local network.
A device that allows you to process sound, output it to other devices, record it with a microphone, etc.
A device designed to convert the AC voltage from the mains to DC voltage.
Operating system (OS) is a comprehensive software system designed to manage a computer's resources. With operating systems, people do not have to deal directly with the processor, RAM or other parts of the PC.
OS can be thought of as an abstraction layer that manages the hardware of a computer, thereby providing a simple and convenient environment for user software to run.
- RAM management (space allocation for individual programs)
- Loading programs into RAM and their execution
- Execution of requests from user programs (inputting and outputting data, starting and stopping other programs, freeing up memory or allocating additional memory, etc.)
- Interaction with input and output devices (mouse, keyboard, monitor, etc.)
- Interaction with storage media (HDDs and SSDs)
- Providing a user's interface (console shell or graphical interface)
- Logging of software errors (saving logs)
- Organise multitasking (simultaneous execution of several programs)
- Delimiting access to resources for each process
- Inter-process communication (data exchange, synchronisation)
- Organise the protection of the operating system itself against other programs and the actions of the user
- Provide multi-user mode and differentiate rights between different OS users (admins, guests, etc.)
The central part of the operating system which is used most intensively. The kernel is constantly in memory, while other parts of the OS are loaded into and unloaded from memory as needed.
The system software that prepares the environment for the OS to run: it puts the hardware in the right state, prepares the memory, loads the OS kernel into it, and transfers control to the kernel.
Special software that allows the OS to work with a particular piece of equipment.
A kind of container in which all the resources needed to run a program are stored. As a rule, the process consists of:
- Executable program code
- Input and output data
- Call stack (order of instructions for execution)
- Heap (a structure for storing intermediate data created during the process)
- Segment descriptor
- File descriptor
- Information about the set of permissible privileges
- Processor status information
An entity in which sequences of program actions (procedures) are executed. Threads are within a process and use the same address space. There can be multiple threads in a single process, allowing multiple tasks to be performed. These tasks, thanks to threads, can exchange data, use shared data or the results of other tasks.
The ability to perform multiple tasks simultaneously using multiple processor cores, where each individual core performs a different task.
The ability to perform multiple tasks, but using a single processor core. This is achieved by dividing tasks into separate blocks of commands which are executed in turn, but switching between these blocks is so fast that for users it seems as if these processes are running simultaneously.
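At the shell level, this idea can be felt with background jobs: each `&` starts a separate process, and `wait` blocks until they all finish. A rough sketch (independent processes rather than threads, which the OS may schedule on different cores):

```shell
(sleep 1; echo "task A done") &   # first task runs in its own process
(sleep 1; echo "task B done") &   # runs concurrently with the first
wait                              # block until all background jobs finish
echo "all tasks finished"         # total time is about 1s, not 2s
```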
A mechanism that allows threads of the same process or of different processes to exchange data. Processes can be run on the same computer or on different computers connected by a network. Inter-process communication can be done in different ways.
The easiest way to exchange data. One process writes data to a certain file, another process reads the same file and thus receives data from the first process.
Asynchronous notification of one process about an event which occurred in another process.
In particular, IP addresses and ports are used for communication between computers using the TCP/IP protocol stack. This pair defines a socket (an endpoint corresponding to the address and port).
A counter on which only two operations can be performed: increment and decrement (and at 0, the decrement operation blocks).
Redirecting the output of one process to the input of another (similar to a pipe).
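Pipes are directly available in any shell: the `|` operator connects the stdout of one process to the stdin of another, and `mkfifo` creates a named pipe as a filesystem object. A minimal sketch (the `/tmp` path is just for illustration):

```shell
# Unnamed pipe: stdout of the first process becomes stdin of the second
printf 'hello world\n' | tr 'a-z' 'A-Z'     # prints: HELLO WORLD

# Named pipe (FIFO): two independent processes communicate through a file-like object
mkfifo /tmp/demo_fifo
echo "message via fifo" > /tmp/demo_fifo &  # writer blocks until a reader opens the pipe
read line < /tmp/demo_fifo                  # reader receives the data
echo "$line"                                # prints: message via fifo
rm /tmp/demo_fifo
```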
Operating systems based on Linux kernel are the standard in the world of server development, since most servers run on such operating systems. Using Linux on servers is profitable because it is free and open source, secure and works fast on cheap hardware.
There are a huge number of Linux distributions (preinstalled software bundles) to suit all tastes. One of the most popular is Ubuntu. This is where you can start your dive into server development.
Install Ubuntu on a separate PC or laptop. If this is not possible, you can use VirtualBox, a special program that lets you run other OSes on top of your main OS. You can also run a Docker container with an Ubuntu image (Docker is a separate topic covered in this repository).
Shell (or console, terminal) is a computer program which is used to operate and control a computer by entering special text commands. Generally, servers do not have graphical interfaces (GUI), so you will definitely need to learn how to work with shells. There are many Unix shells, but most Linux distributions come with the Bash shell by default.
ls # list directory contents
cd [PATH] # go to specified directory
cd .. # move to a higher level (to the parent directory)
touch [FILE] # create a file
cat > [FILE] # enter text into the file (overwrite)
cat >> [FILE] # enter text at the end of the file (append)
cat/more/less [FILE] # to view the file contents
head/tail [FILE] # view the first/last lines of a file
pwd # print path to current directory
mkdir [NAME] # create a directory
rmdir [NAME] # delete a directory
cp [FILE] [PATH] # copy a file or directory
mv [FILE] [PATH] # moving or renaming
rm [FILE] # delete a file (use -r for a directory)
find [PATH] -name [NAME] # file system search
du [FILE] # output file or directory size
grep [PATTERN] [FILE] # print lines that match patterns
man [COMMAND] # allows you to view a manual for any command
apropos [STRING] # search for a command with a description that has a specified word
man -k [STRING] # similar to the command above
whatis [COMMAND] # a brief description of the command
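A quick worked example chaining several of the commands above (the paths under /tmp are just for illustration):

```shell
mkdir -p /tmp/demo_dir            # create a directory (-p: no error if it exists)
cd /tmp/demo_dir                  # go into it
echo "first line"  > notes.txt    # create a file and write a line into it
echo "second line" >> notes.txt   # append another line
cat notes.txt                     # view the contents
grep "second" notes.txt           # prints only the matching line: second line
cp notes.txt backup.txt           # make a copy
du backup.txt                     # show the size of the copy
rm notes.txt backup.txt           # clean up the files
cd .. && rmdir /tmp/demo_dir      # leave and remove the directory
```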
Analogous to running as administrator in Windows.
sudo [COMMAND] # executes a command with superuser privileges
Study any of them in order to read and edit files freely through the terminal. The easiest is nano. Something in the middle: micro. The most advanced: Vim.
The package manager is a utility that allows you to install/update software packages from the terminal.
Linux distributions can be divided into several groups, depending on which package manager they use: apt (in Debian based distributions), RPM (the Red Hat package management system) and Pacman (the package manager in Arch-like distributions)
Ubuntu is based on Debian, so it uses apt (advanced packaging tool) package manager.
apt install [package] # install the package
apt remove [package] # remove the package, but keep the configuration
apt purge [package] # remove the package along with the configuration
apt update # update information about new versions of packages
apt upgrade # update the packages installed in the system
apt list --installed # list of packages installed on the system
apt list --upgradable # list of packages that need to be updated
apt search [package] # searching for packages by name on the network
apt show [package] # package information
Interactive console utility for easy viewing of packages to install, update and uninstall them.
Package managers typically work with software repositories. These repositories contain a collection of software packages that are maintained and provided by the distribution's community or official sources.
add-apt-repository [repository_url] # add a new repository
add-apt-repository --remove [repository_url] # remove a repo
# don't forget to update after these operations - apt update
/etc/apt/sources.list # a file containing a list of configured repo links
/etc/apt/sources.list.d # a directory containing files for third-party repos
Low-level tool to install, build, remove and manage Debian packages.
You can use scripts to automate the sequential input of any number of commands. In Bash you can create different conditions (branching), loops, timers, etc. to perform all kinds of actions related to shell input.
The most basic and frequently used features such as: variables, I/O, loops, conditions, etc.
Solve challenges on sites like HackerRank and Codewars. Start using Bash to automate routine activities on your computer. If you're already a programmer, create scripts to easily build your project, to install settings, and so on.
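A tiny script illustrating the basic features mentioned above (the file name and greeting are arbitrary examples):

```shell
#!/bin/bash
# greet.sh - variables, a condition, and a loop in one tiny script

name=${1:-world}                 # take the first argument, default to "world"

if [ "$name" = "world" ]; then   # branching
    echo "Hello, $name! (no argument was passed)"
else
    echo "Hello, $name!"
fi

for i in 1 2 3; do               # a simple loop
    echo "iteration $i"
done
```

Save it as greet.sh and run it with `bash greet.sh Alice`.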
It will point out possible mistakes and teach you best practices for writing really good scripts.
Repositories such as awesome bash and awesome shell have entire collections of useful resources and tools to help you develop even more skills with Bash and shell in general.
Linux-based operating systems are multi-user. This means that several people can run many different applications at the same time on the same computer. For the Linux system to be able to "recognize" a user, they must be logged in, and therefore each user must have a unique name and a secret password.
useradd [name] [flags] # create a new user
passwd [name] # set a password for the user
usermod [name] [flags] # edit a user
usermod -L [name] # block a user
usermod -U [name] # unblock a user
userdel [name] [flags] # delete a user
su [name] # switch to other user
groupadd [group] [flags] # create a group
groupmod [group] [flags] # edit group
groupdel [group] [flags] # delete group
usermod -a -G [groups] [user] # add a user to groups
gpasswd --delete [user] [groups] # remove a user from groups
/etc/passwd # a file containing basic information about users
/etc/shadow # a file containing encrypted passwords
/etc/group # a file containing basic information about groups
/etc/gshadow # a file containing encrypted group passwords
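Each line of /etc/passwd describes one user as seven colon-separated fields (the user below is made up for illustration):

```
alice:x:1001:1001:Alice Smith:/home/alice:/bin/bash
#  1  2   3    4       5          6          7
# 1 - user name             5 - comment (full name)
# 2 - password placeholder  6 - home directory
#     ("x": hash is in /etc/shadow)
# 3 - user ID (UID)         7 - login shell
# 4 - primary group ID (GID)
```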
In Linux, it is possible to share privileges between users, limit access to unwanted files or features, control available actions for services, and much more. In Linux, there are only three kinds of rights - read, write and execute - and three categories of users to which they can be applied - file owner, file group and everyone else.
chown <user> <file> # changes the owner and/or group for the specified files
chmod <rights> <file> # changes access rights to files and directories
chgrp <group> <file> # changes the group ownership of files
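Rights are often written in octal notation, where read=4, write=2, execute=1 for each of the three categories (owner, group, others). A quick sketch with a throwaway file:

```shell
touch /tmp/secret.txt             # an illustrative file
chmod 640 /tmp/secret.txt         # octal: owner rw (4+2=6), group r (4), others none (0)
stat -c '%a' /tmp/secret.txt      # prints: 640
chmod u+x /tmp/secret.txt         # symbolic form: add execute for the owner
stat -c '%a' /tmp/secret.txt      # prints: 740
rm /tmp/secret.txt
```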
An advanced subsystem for managing access rights.
Linux processes can be described as containers in which all information about the state of a running program is stored. Sometimes programs can hang and in order to force them to close or restart, you need to be able to manage processes.
ps # display a snapshot of the processes of all users
top # real-time task manager
[command] & # run the process in the background (without occupying the shell)
jobs # list of processes running in the background
fg [job_id] # bring a background job back to the foreground
# Press [Ctrl+Z] to suspend the current foreground process
bg [job_id] # resume a stopped process in the background
kill [PID] # terminate the process by PID
killall [program] # terminate all processes related to the program
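A minimal sketch of managing a background process from a script (sleep stands in for a hung program):

```shell
sleep 100 &                       # start a long-running process in the background
pid=$!                            # $! holds the PID of the last background process
ps -p "$pid"                      # confirm that it is running
kill "$pid"                       # send SIGTERM to terminate it
wait "$pid" 2>/dev/null || true   # reap the terminated process
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```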
SSH allows remote access to another computer's terminal. In the case of a personal computer, this may be needed to solve an urgent problem, and in the case of working with the server, remote access via SSH is an integral and regularly used practice.
apt install openssh-server # installing SSH (out of the box almost everywhere)
service ssh start # start SSH
service ssh stop # stop SSH
ssh -p [port] [user]@[remote_host] # connecting to a remote machine via SSH
ssh-keygen -t rsa # RSA key generation for passwordless login
ssh-copy-id -i ~/.ssh/id_rsa [user]@[remote_host] # copying a key to a remote machine
/etc/ssh/sshd_config # ssh server global config
~/.ssh/config # ssh client per-user config
~/.ssh/authorized_keys # file with saved public keys
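A typical ~/.ssh/config entry, which shortens a long connection command to just `ssh myserver` (the alias, address, user, and port below are made-up examples):

```
Host myserver                  # alias used on the command line: ssh myserver
    HostName 203.0.113.10      # real address of the remote machine
    User admin
    Port 2222
    IdentityFile ~/.ssh/id_rsa # private key for passwordless login
```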
For Linux there are many built-in and third-party utilities to help you configure your network, analyze it and fix possible problems.
ip address # show info about IPv4 and IPv6 addresses of your devices
ip monitor # real time monitor the state of devices
ifconfig # configure the network adapter and IP protocol settings
traceroute <host> # show the route taken by packets to reach the host
tracepath <host> # trace the path to a network host, discovering the MTU along the way
ping <host> # check connectivity to host
ss -at # show the list of all TCP sockets (listening and established)
dig <host> # show info about the DNS name server
host <host | ip-address> # show the IP address of a specified domain
mtr <host | ip-address> # combination of ping and traceroute utilities
nslookup # query Internet name servers interactively
whois <host> # show info about domain registration
ifplugstatus # detect the link status of a local Linux ethernet device
iftop # show bandwidth usage
ethtool <device name> # show details about your ethernet device
nmap # tool to explore and audit network security
bmon # bandwidth monitor and rate estimator
firewalld # add, configure and remove rules on firewall
iperf # perform network performance measurement and tuning
speedtest-cli # check your network download/upload speed
wget <link> # download files from the Internet
tcpdump
A console utility that allows you to intercept and analyze all network traffic passing through your computer.
netcat
Utility for reading from and writing to network connections using TCP or UDP. It can scan ports, transfer files, and listen on ports; like any listening server, it can also be used as a backdoor.
iptables
A user-space utility that allows you to configure the IP packet filter rules of the Linux kernel firewall, implemented as different Netfilter modules. The filters are organized in tables, which contain chains of rules for how to treat network traffic packets.
curl
Command-line tool for transferring data using various network protocols.
Schedulers allow you to flexibly manage the delayed running of commands and scripts. Linux has a built-in cron scheduler that can be used to easily perform necessary actions at certain intervals.
crontab -e # edit the crontab file of the current user
crontab -l # output the contents of the current schedule file
crontab -r # deleting the current schedule file
/etc/crontab # base config
/etc/cron.d/ # a dir with crontab files used to manage the entire system
# dirs where you can store scripts that run:
/etc/cron.daily/ # every day
/etc/cron.weekly/ # every week
/etc/cron.monthly/ # every month
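The crontab entry format itself is worth memorizing: five time fields followed by the command. A sketch (the paths and script names below are made up for illustration):

```
# minute  hour  day-of-month  month  day-of-week  command
0 3 * * *      /home/user/backup.sh       # every day at 03:00
*/15 * * * *   /home/user/healthcheck.sh  # every 15 minutes
0 9 * * 1      /home/user/report.sh       # every Monday at 09:00
```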
Log files are special text files that contain all information about the operation of a computer, program, or user. They are especially useful when bugs and errors occur in the operation of a program or server. It is recommended to periodically review log files, even if nothing suspicious happens.
/var/log/syslog or /var/log/messages # information about the kernel,
# various services detected, devices, network interfaces, etc.
/var/log/auth.log or /var/log/secure # user authorization information
/var/log/faillog # failed login attempts
/var/log/dmesg # information about device drivers
/var/log/boot.log # operating system boot information
/var/log/cron # cron task scheduler report
Designed for easy viewing of log files (highlighting, reading different formats, searching, etc.).
Allows you to configure automatic deletion (rotation) of log files so that they do not fill up the disk.
Collects data from all available sources and stores it in binary format for convenient and dynamic control.
- Unmet dependencies - occur when a package fails to install or update.
- Dependency errors and conflicts
All free Linux drivers are built right into its kernel. Therefore, everything should work "out of the box" after installing the system (problems may occur with brand new hardware which has just been released on the market). Drivers whose source code is closed are considered proprietary and are not included in the kernel but are installed manually (like Nvidia graphics drivers).
- Check disk space availability using the df command and ensure that critical partitions are not full.
- Use the fsck command to check and repair file system inconsistencies.
- In case of data loss or accidental deletion, utilize data recovery tools like extundelete or testdisk.
- Check system resource usage, including CPU, memory, and disk space, using free, df, or du commands.
- Identify resource-intensive processes using tools like top, htop, or systemd-cgtop.
- Disable unnecessary startup services or background processes to improve performance.
- Use the ping command to check network connectivity to a specific host or IP address.
- Check the network settings, such as IP configuration, DNS settings, and firewall rules.
Kernel panic - can occur due to an error when mounting the root file system. This is best helped by the skill of reading the logs to find problems (dmesg command).
Numeral system is a set of symbols and rules for denoting numbers. In computer science, it is customary to distinguish four main number systems: binary, octal, decimal, and hexadecimal. It is connected, first of all, with their use in various branches of programming.
The most important system for computing technology. Its use is justified by the fact that the logic of the processor is based on only two states (on/off, open/closed, true/false, yes/no, high/low).
It is used e.g. in Linux systems to grant access rights.
A system that is easy to understand for most people.
The letters A, B, C, D, E, F are additionally used for recording. It is widely used in low-level programming and computer documentation because the minimum addressable memory unit is an 8-bit byte, the values of which are conveniently written in two hexadecimal digits.
You can try an online converter for a better understanding.
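Most languages also have built-in tools for converting between these bases; a quick sketch in Python:

```python
n = 42

# decimal to other bases
print(bin(n))  # binary
print(oct(n))  # octal
print(hex(n))  # hexadecimal

# back to decimal: int() with an explicit base
print(int("101010", 2))
print(int("52", 8))
print(int("2a", 16))
```

All three conversions back to decimal give the same number, 42.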
Logical connectives are widely used in programming to handle boolean types (true/false or 1/0). The result of a boolean expression is also a value of a boolean type.
They are the basis of all other kinds of operations.
There are three in total: Operation AND (&&, Conjunction), operation OR (||, Disjunction), operation NOT (!, Negation).
An important operation that is fundamental to coding theory and computer networks.
For logical operations, there are special tables that describe the input data and the return result.
The NOT operator has the highest priority, followed by the AND operator, and then the OR operator. You can change this behavior using round brackets.
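In Python the same rules look like this (not is evaluated before and, which is evaluated before or):

```python
a, b = True, False

print(a and b)  # conjunction  -> False
print(a or b)   # disjunction  -> True
print(not a)    # negation     -> False
print(a ^ b)    # on booleans, ^ is exclusive OR -> True

# precedence: not > and > or
print(not b and a or b)      # parsed as ((not b) and a) or b -> True
print(not (b and (a or b)))  # brackets change the grouping   -> True
```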
Data structures are containers in which data is stored according to certain rules. Depending on these rules, the data structure will be effective in some tasks and ineffective in others. Therefore, it is necessary to understand when and where to use this or that structure.
A data structure that allows you to store data of the same type, where each element is assigned a different sequence number.
A data structure where all elements, in addition to the data, contain references to the next and/or previous element. There are 3 varieties:
- A singly linked list is a list where each element stores a link to the next element only (one direction).
- A doubly linked list is a list where the items contain links to both the next item and the previous one (two directions).
- A circular linked list is a kind of doubly linked list, where the last element contains a pointer to the first and the first to the last.
Structure where data storage works on the principle of last in - first out (LIFO).
Structure where data storage is based on the principle of first in - first out (FIFO).
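Both structures are easy to sketch in Python: a plain list works as a stack, and collections.deque as a queue:

```python
from collections import deque

# stack: last in - first out
stack = []
stack.append(1)
stack.append(2)
stack.append(3)
print(stack.pop())  # the last element pushed comes out first: 3

# queue: first in - first out
queue = deque()
queue.append(1)
queue.append(2)
queue.append(3)
print(queue.popleft())  # the first element added comes out first: 1
```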
In other words, it is an associative array. Here, each element is accessed by a corresponding key, which is calculated using a hash function according to a certain algorithm.
Structure with a hierarchical model, as a set of related elements, usually not ordered in any way.
Similar to the tree, but in a heap the item with the largest key is the root node (max-heap). It may also be the other way around; then it is a min-heap.
A structure that is designed to work with a large number of links.
Algorithms refer to sets of sequential instructions (steps) that lead to the solution of a given problem. Throughout human history, a huge number of algorithms have been invented to solve certain problems in the most efficient way. Accordingly, the correct choice of algorithms in programming will allow you to create the fastest and most resource-efficient solutions.
There is a very good book about algorithms for beginners - Grokking Algorithms. You can start learning a programming language in parallel with reading it.
A highly efficient search algorithm for sorted lists.
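A minimal sketch of binary search in Python (the list must already be sorted):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1   # discard the left half
        else:
            high = mid - 1  # discard the right half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # found at index 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # not found: -1
```

Each step halves the search range, which is where the O(log n) complexity comes from.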
At each step of the algorithm, the minimum element is searched for and then swapped with the current iteration element.
A technique where a function calls itself. On the one hand, recursion-based solutions look very elegant, but on the other hand, without a correct base case this approach quickly leads to stack overflow, so it should be used with care.
At each iteration neighboring elements are sequentially compared, and if the order of the pair is wrong, the elements are swapped.
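A sketch of bubble sort in Python:

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping out-of-order neighbors."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):  # the tail of the list is already sorted
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps in a full pass means the list is sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```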
Improved bubble sorting method.
Finds the shortest paths from a given vertex to all other vertices of the graph.
Finds the shortest paths between all vertices of a graph and their length.
An algorithm that at each step makes locally the best choice in the hope that the final solution will be optimal.
In the world of programming there is a special unit of measure Big O notation. It describes how the complexity of an algorithm increases with the amount of input data. Big O estimates how many actions (steps/iterations) it takes to execute the algorithm, while always showing the worst case scenario.
- Constant O(1) - the fastest.
- Linear O(n)
- Logarithmic O(log n)
- Linearithmic O(n * log n)
- Quadratic O(n^2)
- Exponential O(2^n)
- Factorial O(n!) - the slowest.
When you know in advance on which machine the algorithm will be executed, you can measure the execution time of the algorithm. Again, on very good hardware the execution time of the algorithm can be quite acceptable, but the same algorithm on a weaker hardware can run for hundreds of milliseconds or even a few seconds. Such delays will be very sensitive if your application handles user requests over the network.
In addition to time, you need to consider how much memory is spent on the work of an algorithm. This is important when you are working with limited memory resources.
Different file formats can be used to store and transfer data over the network. Text files are human-readable, so they are used for configuration files, for example. But transferring data in text formats over the network is not always rational, because they weigh more than their corresponding binary files.
Text formats
Represents an object in which data is stored as key-value pairs.
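Serializing to and from JSON is built into most languages; a quick Python sketch:

```python
import json

user = {"name": "Alex", "age": 30, "languages": ["Python", "Go"]}

# object -> JSON string
text = json.dumps(user)
print(text)

# JSON string -> object
restored = json.loads(text)
print(restored["name"])  # Alex
```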
The format is similar to HTML: the data is wrapped in opening and closing tags.
A minimalist, human-readable format: it has no opening or closing tags, and structure is defined by indentation. Easy to edit.
A minimal configuration file format that's easy to read due to obvious semantics. TOML is designed to map unambiguously to a hash table. TOML should be easy to parse into data structures in a wide variety of languages.
Binary formats
Binary analog of JSON. Allows you to pack data 15-20% more efficiently.
It is a superset of JSON, including additionally regular expressions, binary data and dates.
Binary alternative to XML text format. Simpler, more compact and faster.
Image formats
It is best suited for photographs and complex images with a wide range of colors. JPEG images can achieve high compression ratios while maintaining good image quality, but repeated editing and saving can result in loss of image fidelity.
It is a lossless compression format that supports transparency. It is commonly used for images with sharp edges, logos, icons, and images that require transparency. PNG images can have a higher file size compared to JPEG, but they retain excellent quality without degradation during repeated saves.
Used for simple animations and low-resolution images with limited colors. It supports transparency and can be animated by displaying a sequence of frames.
XML-based vector image format defined by mathematical equations rather than pixels. SVG images can be scaled to any size without losing quality and are well-suited for logos, icons, and graphical elements.
Modern image format developed by Google. It supports both lossy and lossless compression, providing good image quality with smaller file sizes compared to JPEG and PNG. WebP images are optimized for web use and can include transparency and animation.
Video formats
Widely used video format that supports high-quality video compression, making it suitable for streaming and storing videos. MP4 files can contain both video and audio.
Is a multimedia container format developed by Microsoft. It can store audio and video data in a single file, allowing for synchronized playback. However, they tend to have larger file sizes compared to more modern formats.
Is a video format developed by Apple for use with their QuickTime media player. It is widely used with Mac and iOS devices. MOV files can contain both video and audio, and they offer good compression and quality, making them suitable for editing and professional use.
Best for videos embedded on your personal or business website. It is lightweight, loads quickly, and streams easily.
Audio formats
The most popular audio format known for its high compression and small file sizes. It achieves this by removing some of the audio data that may be less perceptible to the human ear. Suitable for music storage, streaming, and sharing.
Is an uncompressed audio format that stores audio data in a lossless manner, resulting in high-quality sound reproduction. WAV files are commonly used in professional audio production and editing due to their accuracy and fidelity. However, they tend to have larger file sizes compared to compressed formats.
Is a widely used audio format known for its efficient compression and good sound quality. It offers better sound reproduction at lower bit rates compared to MP3. AAC files are commonly used for streaming music, online radio, and mobile devices, as they deliver good audio quality while conserving bandwidth and storage.
Computers work only with numbers, or more precisely, only with 0 and 1. It is already clear how to convert numbers from different number systems to binary. But you can't do that with text. That's why special tables called encodings were invented, in which text characters are assigned numeric equivalents.
The simplest encoding created specifically for the American alphabet. Consists of 128 characters.
This is an international character table that, in addition to the English alphabet, contains the alphabets of almost all countries. It can hold more than a million different characters (the table is currently incomplete).
UTF-8 is a variable-length encoding that can represent any Unicode character.
Its main difference from UTF-8 is that its structural unit is not one but two bytes. That is, in UTF-16 any Unicode character can be encoded by either two or four bytes.
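The difference is easy to see in Python by encoding the same string both ways:

```python
text = "hi Ж"  # three ASCII characters plus one Cyrillic letter

utf8 = text.encode("utf-8")
utf16 = text.encode("utf-16-le")

# in UTF-8, ASCII characters take 1 byte each; the Cyrillic letter takes 2
print(len(utf8))   # 5 bytes: 1 + 1 + 1 + 2

# in UTF-16, every character here takes 2 bytes
print(len(utf16))  # 8 bytes: 4 characters * 2

# decoding restores the original string
print(utf8.decode("utf-8"))
```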
At this stage you have to choose one programming language to study. There is plenty of information on various languages in the Internet (books, courses, thematic sites, etc.), so you should have no problem finding information.
Below is a list of specific languages that, in my opinion, are good for backend development (others, including those more competent in this matter, may disagree).
A very popular language with a wide range of applications. Easy to learn due to its simple syntax.
No less popular, and practically the only language for full-fledged web development in the browser. Thanks to the Node.js platform, in recent years it has also been gaining popularity in backend development.
A language developed internally at Google, designed specifically for high-load server development. Minimalistic syntax, high performance and a rich standard library.
A kind of modern version of Java. Simpler and more concise syntax, better type-safety, built-in tools for multithreading. One of the best choices for Android development.
Find a good book or online tutorial in English at this repository. There is a large collection for different languages and frameworks.
Look for a special awesome repository - a resource that contains a huge number of useful links to materials for your language (libraries, cheat sheets, blogs and other various resources).
There are many programming languages. They are all created for a reason. Some languages may be very specific and used only for certain purposes. Also, different languages may use different approaches to writing programs. They may even run differently on a computer. In general, there are many different classifications, which would be useful to understand.
As close as possible to machine code: complex to write, but as performant as possible. As a rule, they provide access to all of the computer's resources.
They have a fairly high level of abstraction, which makes them easy to write and easy to use. As a rule, they are safer because they do not provide access to all of the computer's resources.
Allows you to convert the source code of a program to an executable file.
The source code of a program is translated and immediately executed (interpreted) by a special interpreter program.
In this approach, the program is not compiled into a machine code, but into machine-independent low-level code - bytecode. This bytecode is then executed by the virtual machine itself.
Focuses on describing the steps to solve a problem through a sequence of statements or commands.
Focuses on describing what the program should do, rather than how it should do it. Examples of declarative languages include SQL and HTML.
Based on the idea of treating computation as the evaluation of mathematical functions. It emphasizes immutability, avoiding side effects, and using higher-order functions. Examples of functional languages include Haskell, Lisp, and Clojure.
Revolves around creating objects that contain both data and behavior, with the goal of modeling real-world concepts. Examples of object-oriented languages include Java, Python, and C++.
Focused on handling multiple tasks or threads at the same time, and is used in systems that require high performance and responsiveness. Examples of concurrent languages include Go and Erlang.
By foundations we mean the fundamental ideas present in every language.
Are names assigned to a memory location in the program to store some data.
Define the type of data that can be stored in a variable. The main data types are integers, floating-point numbers, characters, strings, and booleans.
Used to perform operations on variables or values. Common operators include arithmetic operators, comparison operators, logical operators, and assignment operators.
Loops and conditions: if else and switch case statements.
Are blocks of code that can be called multiple times in a program. They allow for code reusability and modularization. Functions are an important concept for understanding the scope of variables.
Special containers in which data are stored according to certain rules. Main data structures are arrays, maps, trees, graphs.
This refers to the language's built-in features for manipulating data structures, working with the file system, network, cryptography, etc.
Used to handle unexpected events that can occur during program execution.
A powerful tool for working with strings. Be sure to familiarize yourself with it in your language, at least on a basic level.
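A small example with Python's built-in re module (the pattern and the log line are made up for illustration):

```python
import re

log_line = "2024-01-15 ERROR disk is full"

# named groups capture the parts we care about
pattern = r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<level>[A-Z]+) (?P<message>.+)"
match = re.match(pattern, log_line)

print(match.group("date"))     # 2024-01-15
print(match.group("level"))    # ERROR
print(match.group("message"))  # disk is full

# findall extracts every match in a string
print(re.findall(r"\d+", "port 8080, timeout 30"))  # ['8080', '30']
```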
Writing the code of the whole program in one file is not at all convenient. It is much more readable to break it up into smaller modules and import them into the right places.
Sooner or later, there will be a desire to use third-party libraries.
After mastering the minimal base for writing the simplest programs, there is not much point in continuing to learn without having specific goals (without practice, everything will be forgotten). You need to think of/find something that you would like to create yourself (a game, a chatbot, a website, a mobile/desktop application, whatever). For inspiration, check out these repositories: Build your own x and Project based learning.
At this point, the most productive part of learning begins: You just look for all kinds of information to implement your project. Your best friends are Google, YouTube, and Stack Overflow.
OOP is one of the most successful and convenient approaches for modeling real-world things. This approach combines several very important principles which allow to write modular, extensible and loosely coupled code.
A class can be understood as a custom data type (a kind of template) in which you describe the structure of future objects that will implement the class. Classes can contain properties (specific fields in which data of a particular data type can be stored) and methods (functions that have access to properties and the ability to manipulate and modify them).
An object is a specific implementation of a class. If, for example, the name property with type string is described in a class, the object will have a specific value for that field, for example "Alex".
Ability to create new classes that inherit properties and methods of their parents. This allows you to reuse code and create a hierarchy of classes.
Ability to hide certain properties/methods from external access, leaving only a simplified interface for interacting with the object.
The ability to implement the same method differently in descendant classes.
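The three principles together, sketched in Python (the class names are illustrative):

```python
class Animal:
    def __init__(self, name):
        self._name = name  # encapsulation: a field intended for internal use

    def speak(self):
        raise NotImplementedError  # each descendant must provide its own version

    def introduce(self):
        return f"{self._name} says {self.speak()}"

class Dog(Animal):      # inheritance: Dog reuses Animal's code
    def speak(self):    # polymorphism: the same method, a different implementation
        return "woof"

class Cat(Animal):
    def speak(self):
        return "meow"

for animal in (Dog("Rex"), Cat("Tom")):
    print(animal.introduce())
# Rex says woof
# Tom says meow
```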
Often the principle of inheritance can complicate and confuse your program if you do not think carefully about how to build the future hierarchy. That is why there is an alternative (more flexible) approach called composition. In particular, the Go language lacks classes and many OOP principles, but widely uses composition.
Dependency injection is a popular OOP pattern that allows objects to receive their dependencies (other objects) from the outside rather than creating them internally. It promotes loose coupling between classes, making code more modular, maintainable, and easier to test.
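A minimal sketch of the idea in Python (the classes and names are made up for illustration):

```python
class SmtpSender:
    """The 'real' dependency used in production."""
    def send(self, to, message):
        print(f"SMTP -> {to}: {message}")

class FakeSender:
    """A stand-in used in tests instead of a real mail server."""
    def __init__(self):
        self.sent = []
    def send(self, to, message):
        self.sent.append((to, message))

class NotificationService:
    def __init__(self, sender):
        self.sender = sender  # the dependency is injected from outside

    def notify(self, user):
        self.sender.send(user, "hello")

# production wiring
NotificationService(SmtpSender()).notify("alex@example.com")

# test wiring: same service, a different dependency
fake = FakeSender()
NotificationService(fake).notify("test@example.com")
print(fake.sent)  # [('test@example.com', 'hello')]
```

Because NotificationService never constructs its sender itself, swapping the real implementation for a fake takes one line, which is exactly what makes the code easy to test.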
A socket is an endpoint of a two-way communication link between two programs running over a network. You need to know how to create, connect, send, and receive data over sockets.
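A minimal TCP echo sketch using Python's standard socket module (the server runs in a background thread only to keep the example self-contained):

```python
import socket
import threading

def run_server(server_sock):
    conn, _ = server_sock.accept()     # wait for one client
    data = conn.recv(1024)             # read up to 1024 bytes
    conn.sendall(b"echo: " + data)     # send a reply back
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_server, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply)  # b'echo: hello'
```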
These protocols are the most important, you need to understand the intricacies of working with each of them.
You need to know how to host HTML pages, pictures, PDF documents, music/video files, etc.
Creation of endpoints (URLs) which will call the appropriate handler on the server when accessed.
As a rule, HTTP handlers have a special object which receives all the information about the user's request (headers, method, request body, query parameters, and so on).
Sending an appropriate message to a received request (HTTP status and code, response body, headers, etc.)
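The request/response cycle can be sketched with Python's built-in http.server, no framework needed (illustration only, not production-ready):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)                       # status line
        self.send_header("Content-Type", "text/plain")  # headers
        self.end_headers()
        self.wfile.write(b"hello")                    # response body

    def log_message(self, *args):
        pass  # keep the example output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    status, body = resp.status, resp.read()
server.shutdown()

print(status, body)  # 200 b'hello'
```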
You should always be prepared for the possibility that something will go wrong: the user will send incorrect data, the database will not perform the operation, or an unexpected error will simply occur in the application. It is necessary for the server not to crash, but to send a response with information about the error.
An intermediate component between the application and the server. It is used for handling authentication, validation, caching data, logging requests, and so on.
Often, within one application, you will need to access another application over the network. That's why it's important to be able to send HTTP requests using the built-in features of the language.
Is a special module that uses a more convenient syntax to generate HTML based on dynamic data.
Asynchronous programming is an efficient way to write programs with a large number of I/O (input/output) operations. Such operations may include reading files, requesting to a database or remote server, reading user input, and so on. In these cases, the program spends a lot of time waiting for external resources to respond, and asynchronous programming allows the program to perform other tasks while waiting for the response.
This is a function that is passed as an argument to another function and is intended to be called by that function at a later time. The purpose of a callback is to allow the calling function to continue executing while the called function performs a time-consuming or asynchronous task. Once the task is complete, the called function will invoke the callback function, passing it any necessary data as arguments.
A popular approach to writing asynchronous programs. The logic of the program is to wait for certain events and process them as they arrive. This can be useful in web applications that need to handle a large number of concurrent connections, such as chat applications or real-time games.
- In Python, asynchronous programming can be done using the asyncio module, which provides an event loop and coroutine-based API for concurrency. There are also other third-party libraries like Twisted and Tornado that provide asynchronous capabilities.
- In JavaScript, asynchronous programming is commonly achieved through the use of promises, callbacks, async/await syntax and the event loop.
- Go has built-in support for concurrency through goroutines and channels, which allow developers to write asynchronous code that can communicate and synchronize across multiple threads.
- Kotlin provides coroutines, which are similar to JavaScript's async/await and Python's asyncio, and can be used with a variety of platforms and frameworks.
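A small asyncio sketch in Python: while one coroutine waits for I/O, the other runs, so the waits overlap instead of adding up:

```python
import asyncio
import time

async def fetch(name, delay):
    await asyncio.sleep(delay)  # stands in for a slow I/O operation (DB, HTTP, disk)
    return f"{name} done"

async def main():
    start = time.monotonic()
    # both "requests" wait concurrently: the total is ~0.5s, not 1.0s
    results = await asyncio.gather(fetch("db", 0.5), fetch("api", 0.5))
    elapsed = time.monotonic() - start
    print(results)  # ['db done', 'api done']
    return results, elapsed

results, elapsed = asyncio.run(main())
```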
Computers today have processors with several physical and virtual cores, and on server machines their number can reach hundreds. It would be good to use all of these available resources to the fullest for maximum application performance. That is why modern server development cannot do without multitasking and parallelism.
Multitasking refers to the concurrent execution of multiple threads of control within a single program. A thread is a lightweight process that runs within the context of a process, and has its own stack, program counter, and register set. Multiple threads can share the resources of a single process, such as memory, files, and I/O devices. Each thread executes independently and can perform a different task or part of a task.
- Cooperative multitasking: each program or task voluntarily gives up control of the CPU to allow other programs or tasks to run. Each program or task is responsible for yielding control to other programs or tasks at appropriate times. This approach requires programs or tasks to be well-behaved and to avoid monopolizing the CPU. If a program or task does not yield control voluntarily, it can cause the entire system to become unresponsive. Cooperative multitasking was commonly used in early operating systems and is still used in some embedded systems or real-time operating systems.
- Preemptive multitasking: operating system forcibly interrupts programs or tasks at regular intervals to allow other programs or tasks to run. The operating system is responsible for managing the CPU and ensuring that each program or task gets a fair share of CPU time. This approach is more robust than cooperative multitasking and can handle poorly behaved programs or tasks that do not yield control. Preemptive multitasking is used in modern operating systems, such as Windows, macOS, Linux, and Android.
- Race conditions: When multiple threads access and modify shared data concurrently, race conditions can occur, resulting in unpredictable behavior or incorrect results.
- Deadlocks: Occur when two or more threads are blocked waiting for resources that are held by other threads, resulting in a deadlock.
- Debugging: Multitasking programs can be difficult to debug due to their complexity and non-deterministic behavior. You need to use advanced debugging tools and techniques, such as thread dumps, profilers, and logging, to diagnose and fix issues.
Needed to securely exchange data between different threads.
- Semaphore: It is essentially a counter that keeps track of the number of available resources and can block threads or processes that try to acquire more than the available resources.
- Mutex: (short for mutual exclusion) allows only one thread or process to access the resource at a time, ensuring that there are no conflicts or race conditions.
- Atomic operations: operations that are executed as a single, indivisible unit, without the possibility of interruption or interference by other threads or processes.
- Condition variables: allows threads to wait for a specific condition to be true before continuing execution. It is often used in conjunction with a mutex to avoid busy waiting and improve efficiency.
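A mutex in Python is threading.Lock; without it, the concurrent increments below could interleave and lose updates:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:  # only one thread mutates counter at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 - with the lock the result is always exact
```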
- In Python you can see threading and multiprocessing modules.
- In Node.js you can work with worker threads, cluster module and shared array buffers.
- Go has incredible goroutines and channels.
- Kotlin provides coroutines.
A process that has made high-level languages very popular - it allows the programmer not to worry about memory allocation and freeing. Be sure to familiarize yourself with the subtleties of its operation in your own language.
Handy tool for analyzing program code and identifying errors.
Depending on what your language uses, you can explore in detail the process of converting your code to machine code (a set of zeros and ones). As a rule, compilation/interpretation/virtualization processes consist of several steps. By understanding them you can optimize your programs for faster builds and efficient execution.
During these long years that programming has existed, a huge amount of code, programs and entire systems have been written. And as a consequence, there have been all sorts of problems in the development of all this. First of all they were related to scaling, support, and the entry threshold for new developers. Clever people, of course, did not sit still and started to solve these problems, thus creating so-called patterns/principles/approaches for writing high-quality code.
By learning programming best practices, you will not only make things better for yourself, but also for others, because other developers will be working with your code.
For many languages there are special style guides and coding conventions. They usually compare the right and wrong way of writing code and explain why this is the case.
Databases (DB) - a set of data organized according to certain rules (for example, a library is a database for books).
Database management system (DBMS) is a software that allows you to create a database and manipulate it conveniently (perform various operations on the data). An example of a DBMS is a librarian. He can easily and efficiently work with the books in the library: give out requested books, take them back, add new ones, etc.
Databases can differ significantly from each other and therefore have different areas of application. To understand what database is suitable for this or that task, it is necessary to understand the classification.
These are repositories where data is organized as a set of tables (with rows and columns). Interactions between data are organized on the basis of links between these tables. This type of database provides fast and efficient access to structured information.
Here data is represented as objects with a set of attributes and methods. Suitable for cases where you need high-performance processing of data with a complex structure.
Composed of several parts located on different computers (servers). Such databases may completely exclude information duplication, or completely duplicate it in each distributed copy (for example, as Blockchain).
Stores and processes unstructured or weakly structured data. This type of database is subdivided into subtypes:
- Keyβvalue DB
- Column family DB
- Document-oriented DB (store data as a hierarchy of documents)
- Graph DB (are used for data with a large number of links)
The most popular relational databases: MySQL, PostgreSQL, MariaDB, Oracle. A special language SQL (Structured Query Language) is used to work with these databases. It is quite simple and intuitive.
Learn the basic cycle of creating/receiving/updating/deleting data. Everything else as needed.
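The full cycle can be tried without installing anything using Python's built-in sqlite3 module (the table and data below are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a throwaway in-memory database
cur = conn.cursor()

cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")

# Create
cur.execute("INSERT INTO users (name, age) VALUES (?, ?)", ("Alex", 30))

# Read
cur.execute("SELECT name, age FROM users WHERE age > ?", (18,))
rows = cur.fetchall()
print(rows)  # [('Alex', 30)]

# Update
cur.execute("UPDATE users SET age = ? WHERE name = ?", (31, "Alex"))

# Delete
cur.execute("DELETE FROM users WHERE name = ?", ("Alex",))
cur.execute("SELECT COUNT(*) FROM users")
remaining = cur.fetchone()[0]
print(remaining)  # 0

conn.close()
```

Note the ? placeholders: passing values separately instead of formatting them into the string is what protects against SQL injection.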
Operator JOIN; combinations with other operators; JOIN types.
References from one table to another; foreign keys.
Query inside another SQL query.
Data structure that allows you to quickly determine the position of the data of interest in the database.
Sequences of commands that must be executed completely, or not executed at all.
START TRANSACTION, COMMIT and ROLLBACK statements.
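Atomicity is easy to demonstrate with Python's sqlite3: if anything fails mid-transaction, roll back everything (the table and the simulated failure are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    # a money transfer: two updates that must succeed or fail together
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
    raise RuntimeError("simulated failure between the two updates")
except RuntimeError:
    conn.rollback()  # both updates are undone together

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'"
).fetchone()[0]
print(balance)  # 100 - the half-finished transfer left no trace
```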
To do this, you need to install a database driver (adapter) for your language. (For example psycopg2 for Python, node-postgres for Node.js, pgx for Go)
Writing SQL queries in code is difficult. It's easy to make mistakes and typos in them, because they are just strings that are not validated in any way. To solve this problem, there are so-called ORM libraries, which allow you to execute SQL queries as if you were simply calling methods on an object. Unfortunately, even with them all is not so smooth, because "under the hood" queries that are generated by these libraries are not the most optimal in terms of performance (so be prepared to work with ORM, as well as with pure SQL).
Popular ORMs: SQLAlchemy for Python, Prisma for Node.js, GORM for Go.
MongoDB is a popular NoSQL database that stores data in flexible, JSON-like documents, allowing for dynamic and scalable data structures. It offers high performance, horizontal scalability, and a powerful query language, making it a preferred choice for modern web applications.
Learn the basic cycle of creating/reading/updating/deleting data. Everything else as needed.
MongoDB provides a powerful aggregation framework for performing complex queries and calculations. Learn how to use aggregation pipelines.
Indexing is an important concept in MongoDB for improving performance.
For this you need to install MongoDB driver for your language.
Learn best practices for schema design, indexing, and query optimization. Read up on these to ensure your applications are performant and scalable.
Learn about scaling to handle large datasets and high traffic. MongoDB provides sharding and replica sets for scaling horizontally and vertically.
Redis is a fast in-memory data store that works with key-value structures. It can be used as a database, cache, message broker or queue.
String / Bitmap / Bitfield / List / Set / Hash / Sorted sets / Geospatial / HyperLogLog / Stream
SET key "value" # setting the key with the value "value"
GET key # retrieve a value from the specified key
SETNX key "data" # set the value only if the key does not already exist
MSET key1 "1" key2 "2" key3 "3" # setting multiple keys
MGET key1 key2 key3 # getting values for several keys at once
DEL key # remove the key-value pair
INCR someNumber # increase the numeric value by 1
DECR someNumber # decrease the numeric value by 1
EXPIRE key 1000 # set a key life timer of 1000 seconds
TTL key # get information about the lifetime of the key-value pair
# -1 the key exists, but has no expiration date
# -2 the key does not exist
# <another number> key lifetime in seconds
SETEX key 1000 "value" # combines the SET and EXPIRE commands into one
MULTI - start recording commands for the transaction.
EXEC - execute the recorded commands.
DISCARD - delete all recorded commands.
WATCH - ensures the transaction runs only if the watched keys have not been changed by other clients in the meantime. Otherwise EXEC will not execute the recorded commands.
ACID is an acronym consisting of the names of the four main properties that guarantee the reliability of transactions in the database.
Guarantees that the transaction will be executed completely or not executed at all.
Ensures that each successful transaction captures only valid results (any inconsistencies are excluded).
Guarantees that one transaction cannot affect the other in any way.
Guarantees that the changes made by the transaction are saved.
Database design is a very important topic that is often overlooked. A well-designed database will ensure long-term scalability and ease of data maintenance. There are several basic steps in database design:
An entity is an object, concept, or event that has its own set of attributes. For example, if you're designing a database for a library, entities might include books, authors, publishers, and borrowers.
Each entity has a set of specific attributes. For example, attributes of a book might include its title, author, ISBN, and publication date. Each attribute has a specific data type, be it a string, an integer, a boolean, and so on.
Attribute values may have certain constraints. For example, a value may be required to be unique, or a string may be limited to a maximum number of characters.
Entities can be linked to one another by one of three types of relationship: one-to-one, one-to-many or many-to-many. For example, a book might have one or more authors, and an author might write one or more books. You can represent these relationships by creating a foreign key in one table that references the primary key in another table.
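As a small sketch of the foreign-key idea, here is a one-to-many authors/books relationship using Python's built-in sqlite3 (the schema and rows are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs

# One author can have many books: the foreign key lives on the "many" side
conn.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""
    CREATE TABLE books (
        id        INTEGER PRIMARY KEY,
        title     TEXT NOT NULL,
        author_id INTEGER NOT NULL REFERENCES authors(id)
    )
""")

conn.execute("INSERT INTO authors (id, name) VALUES (1, 'George Orwell')")
conn.execute("INSERT INTO books (title, author_id) VALUES ('1984', 1)")
conn.execute("INSERT INTO books (title, author_id) VALUES ('Animal Farm', 1)")

# A JOIN follows the relationship back to the author
titles = [row[0] for row in conn.execute(
    "SELECT b.title FROM books b JOIN authors a ON b.author_id = a.id "
    "WHERE a.name = 'George Orwell' ORDER BY b.title")]
print(titles)  # ['1984', 'Animal Farm']
```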
It is the process of separating data into separate related tables. Normalization eliminates data redundancy and thus avoids data integrity violations when data changes.
Create indexes on frequently queried columns, tune the database configuration, and optimize the queries that you use to access the data.
API (Application Programming Interface) is an interface that describes a certain set of rules by which different programs (applications, bots, websites...) can interact with each other. With API calls you can execute certain functions of a program without knowing how it works.
When developing server applications, different API formats can be used, depending on the tasks and requirements.
REST (Representational State Transfer) is an architectural approach that describes a set of rules for how a programmer organizes the writing of server application code so that all systems can easily exchange data and the application can be easily scaled. When building a REST API, HTTP protocol methods are widely used.
Basic rules for writing a good REST API:
As a rule, a single URL route is used to work with a particular data model (e.g. for users: /api/user). To perform different operations (get/create/edit/delete), this route must implement handlers for the corresponding HTTP methods (GET/POST/PUT/DELETE).
For example, a URL to retrieve one user by id looks like this: /user/42, and to retrieve all users like this: /users.
The most commonly used: 200, 201, 204, 304, 400, 401, 403, 404, 405, 410, 415, 422, 429.
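The routing conventions above are framework-agnostic. Here is a dependency-free Python sketch of how such a dispatcher might behave; all paths, handlers and data here are invented for illustration, and a real service would use a framework (Flask, FastAPI, Express, etc.):

```python
import json
import re

# toy in-memory "database"
USERS = {42: {"id": 42, "name": "Alice"}}

def handle(method, path, body=None):
    """Dispatch (method, path) the way a REST framework's router would."""
    if m := re.fullmatch(r"/api/users/(\d+)", path):
        user = USERS.get(int(m.group(1)))
        if user is None:
            return 404, {"error": "not found"}
        if method == "GET":
            return 200, user
        if method == "DELETE":
            del USERS[user["id"]]
            return 204, None
        return 405, {"error": "method not allowed"}
    if path == "/api/users":
        if method == "GET":
            return 200, list(USERS.values())
        if method == "POST":
            new_id = max(USERS, default=0) + 1
            USERS[new_id] = {"id": new_id, **json.loads(body)}
            return 201, USERS[new_id]
        return 405, {"error": "method not allowed"}
    return 404, {"error": "not found"}

print(handle("GET", "/api/users/42"))  # (200, {'id': 42, 'name': 'Alice'})
print(handle("POST", "/api/users", '{"name": "Bob"}')[0])  # 201
print(handle("GET", "/api/users/999")[0])  # 404
```

Note how the same route returns different status codes (200, 201, 204, 404, 405) depending on the method and outcome, which is exactly the convention listed above.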
Over time you may want or need to fundamentally change the way your REST API service works. To avoid breaking applications that use the current version, you can leave it where it is and implement the new version under a different URL route, e.g. /api/v2.
API development and design is a very important and demanding task, as your API functionality will be used by other developers and systems to integrate with your service. Mistakes made during design can negatively affect not only the growth opportunities of your service, but also many others that depend on it.
GraphQL is a query language and server-side runtime for APIs that allows you to retrieve and modify data from a server using a single URL endpoint. It provides several benefits, including the ability to retrieve only the data you need (reducing traffic consumption), aggregation of data from multiple sources and a strict type system for describing data.
Learn how to describe data using GraphQL schema and general types.
Queries are used to retrieve data from a server, while Mutations are used to modify (create, update or delete) data on a server.
Resolvers are functions that determine how to retrieve the data for a particular field in the GraphQL schema.
Data sources are the places you retrieve data from, such as databases or APIs. They are connected to the GraphQL server through resolvers.
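To make the resolver idea concrete, here is a toy, library-free illustration: each query field maps to a resolver function, and the client gets back only the subfields it selected. This is a deliberately simplified imitation; a real server would use a GraphQL library such as graphene or Ariadne, and all names and data here are invented:

```python
# A toy "schema": each query field maps to a resolver function.
DB = {1: {"id": 1, "name": "Alice", "email": "alice@example.com"}}

def resolve_user(args):
    """Resolver for the 'user' field: fetch one record from the data source."""
    return DB.get(args["id"])

def resolve_users(args):
    """Resolver for the 'users' field: fetch all records."""
    return list(DB.values())

QUERY_RESOLVERS = {"user": resolve_user, "users": resolve_users}

def execute(field, args=None, selection=None):
    """Run the resolver for one query field and keep only requested subfields."""
    result = QUERY_RESOLVERS[field](args or {})
    if selection and isinstance(result, dict):
        # the client receives only the fields it asked for
        result = {key: result[key] for key in selection}
    return result

# Analogue of the GraphQL query: { user(id: 1) { name } }
print(execute("user", {"id": 1}, selection=["name"]))  # {'name': 'Alice'}
```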
WebSockets is an advanced technology that allows you to open a persistent bidirectional network connection between the client and the server. With its API you can send a message to the server and receive a response without making an HTTP request, thereby implementing real-time communication.
The basic idea is that you do not need to send requests to the server for new information. When the connection is established, the server itself will send a new batch of data to connected clients as soon as that data is available. Web sockets are widely used to create chat rooms, online games, trading applications, etc.
Sending an HTTP request with a specific set of headers: Connection: Upgrade, Upgrade: websocket, Sec-WebSocket-Key, Sec-WebSocket-Version.
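The key step of this handshake is the server computing the Sec-WebSocket-Accept response header from the client's Sec-WebSocket-Key. The computation is fixed by RFC 6455 (SHA-1 of the key concatenated with a magic GUID, then base64), so it can be sketched in a few lines:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455
MAGIC = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server must return."""
    digest = hashlib.sha1((sec_websocket_key + MAGIC).encode()).digest()
    return base64.b64encode(digest).decode()

# Example key taken from RFC 6455 itself
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```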
CONNECTING, OPEN, CLOSING, CLOSED.
Open, Message, Error, Close.
1000, 1001, 1006, 1009, 1011, etc.
RPC is simply a function call to the server with a set of specific arguments, which returns the response usually encoded in a certain format, such as JSON or XML. There are several protocols that implement RPC.
There are two main protocols: XML-RPC and SOAP (Simple Object Access Protocol).
They are considered deprecated and not recommended for new projects because they are heavyweight and complex compared to newer alternatives such as REST, GraphQL and newer RPC protocols.
A protocol with a very simple specification. All requests and responses are serialized in JSON format.
- A request to the server includes:
- method: the name of the method to be invoked;
- params: an object or array of values to be passed as parameters to the defined method;
- id: an identifier used to match the response with the request.
- A response includes:
- result: the data returned by the invoked method;
- error: an error object, or null on success;
- id: the same as in the request.
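Since JSON-RPC messages are just JSON objects, they are easy to construct by hand. A minimal sketch (the "subtract" method and transport are invented; a real setup would send these strings over HTTP or a socket):

```python
import json

def make_request(method, params, request_id):
    """Build a JSON-RPC 2.0 request string."""
    return json.dumps({"jsonrpc": "2.0", "method": method,
                       "params": params, "id": request_id})

def make_response(result=None, error=None, request_id=None):
    """Build a JSON-RPC 2.0 response: either result or error, plus the same id."""
    msg = {"jsonrpc": "2.0", "id": request_id}
    if error is not None:
        msg["error"] = error
    else:
        msg["result"] = result
    return json.dumps(msg)

req = make_request("subtract", [42, 23], 1)
print(req)  # {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1}

# The server parses the request, calls the method, and answers with the same id:
parsed = json.loads(req)
resp = make_response(result=parsed["params"][0] - parsed["params"][1],
                     request_id=parsed["id"])
print(resp)  # {"jsonrpc": "2.0", "id": 1, "result": 19}
```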
RPC framework developed by Google. It works by defining a service using Protocol Buffers, a language-agnostic binary serialization format, from which client and server code for various programming languages is generated.
- Understand protobuf fundamentals.
- See tutorials for your language: Python, Node.js, Go, Kotlin, etc.
- Learn style guides.
Git is a special system for managing the history of changes to the source code. Any changes that are made to Git can be saved, allowing you to roll back (revert) to a previously saved copy of the project. Git is currently the standard for development.
Commit is a record in the repository history that represents information about changes to files.
Branch is a sequence of commits.
A repository is a place where the source code and change history (commits) of your project is stored.
A situation where two branches have different changes in the same location and Git cannot automatically merge them.
A special file to exclude specific files or patterns (e.g., build artifacts) from tracking.
Learn best practices popular in the community.
Docker is a special program that allows you to run isolated sandboxes (containers) with different preinstalled environments (be it a specific operating system, a database, etc.). The containerization technology that Docker provides is similar to virtual machines, but unlike virtual machines, containers use the host OS kernel, which requires far fewer resources.
A special fixed template that contains a description of the environment needed to run the application (OS, source code, libraries, environment variables, configuration files, etc.). Images can be downloaded from the official registry (Docker Hub) and used as a base for your own.
An isolated environment created from an image. It is essentially a running process on a computer which internally contains the environment described in the image.
docker pull [image_name] # Download the image
docker images # List of available images
docker run [image_id] # Running a container based on the selected image
# Some flags for the run command:
-d # Run the container in the background (detached mode), returning control of the console
--name [name] # Name the container
--rm # Remove the container after stopping
-p [local_port]:[port_inside_container] # Port forwarding
docker build [path_to_Dockerfile] # Creating an image based on a Dockerfile
docker ps # List of running containers
docker ps -a # List of all containers
docker stop [id/container_name] # Stop the container
docker start [id/container_name] # Start an existing container
docker attach [id/container_name] # Connect to the container console
docker logs [id/container_name] # Output the container logs
docker rm [id/container_name] # Delete container
docker container prune # Delete all containers
docker rmi [image_id] # Delete image
Dockerfile is a file with a set of instructions and arguments for creating images.
FROM [image_name] # Setting a base image
WORKDIR [path] # Setting the root directory inside the container
COPY [path_relative_Dockerfile] [path_in_container] # Copying files
ADD [path] [path] # Similar to the command above, but also supports URLs and archives
RUN [command] # A command that runs only once, when the image is built
CMD ["command"] # The command that runs every time you start the container
ENV KEY="VALUE" # Setting Environment Variables
ARG KEY=VALUE # Setting variables to pass to Docker during image building
ENTRYPOINT ["command"] # The main command of the container (CMD values are passed to it as arguments)
EXPOSE port/protocol # Indicates the need to open a port
VOLUME ["path"] # Creates a mount point for working with persistent storage
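Putting several of these instructions together, a minimal Dockerfile for a hypothetical Python application might look like this (the file names, port and base image are assumptions; adjust them for your project):

```dockerfile
# Assumes a simple Python app with requirements.txt and app.py in the project root
FROM python:3.12-slim

WORKDIR /app

# Copy the dependency list first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENV PORT=8000
EXPOSE 8000

CMD ["python", "app.py"]
```

Copying requirements.txt before the rest of the source is a common layer-caching trick: dependencies are reinstalled only when the dependency list changes, not on every code edit.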
A tool for defining and running multi-container Docker applications. It allows you to define the services that make up your application in a single file, and then start and stop all of the services with a single command. In a sense, it is a Dockerfile taken to the next level.
When creating a server application, it is necessary to test that it works. This can be done in different ways. One of the easiest is to use the console utility curl, but this is only good for very simple applications. It is much more efficient to use special testing software, which has a user-friendly interface and all the necessary functionality for creating collections of requests.
A very popular and feature-rich program. It has everything you might need and more: from simple creation of collections to spinning up mock servers. The basic functionality of the application is free.
Not as popular, but a very nice tool. Insomnia's interface is minimalist and clear. It has less functionality, but everything you need: collections, variables, work with GraphQL, gRPC, WebSocket, etc. It is also possible to install third-party plugins.
A web server is a program designed to handle incoming HTTP requests. In addition, it can keep error logs (logs), perform authentication and authorization, store rules for file processing, etc.
Not all languages have a built-in web server (e.g. PHP). Therefore, to run web applications written in such languages, a third-party one is needed.
A single server (virtual or dedicated) can run several applications, but it has only one external IP address. A configured web server solves this problem by redirecting incoming requests to the right applications.
Nginx: the most popular web server at the moment.
Apache: also popular, but gradually losing ground.
Caddy: a fairly young web server with great potential.
When creating a large-scale backend system, the problem of communication between a large number of microservices may arise. In order not to complicate existing services (establish a reliable communication system, distribute the load, provide for various errors, etc.) you can use a separate service, which is called a message broker (or message queue).
The broker takes the responsibility of creating a reliable and fault-tolerant system of communication between services (performs balancing, guarantees delivery, monitors recipients, maintains logs, buffering, etc.)
A message is an ordinary HTTP request/response with data of a certain format.
Ngrok is a tool for creating public tunnels on the Internet that allows local network applications (web servers, websites, bots, etc.) to be accessible from outside.
Ngrok creates a temporary public URL that can be used to access your local server from the Internet. Once Ngrok is started, you have access to the console, where you can monitor requests, handling and responses to those requests, and configure additional features such as authentication and encryption.
For example, to test web sites and APIs, to demonstrate running applications on a local server, to access local network applications over the Internet without having to set up a router, firewall, proxy server, etc.
Artificial intelligence systems have made an incredible leap recently. Every day there are more and more tools that can write code for you, generate documentation, do code reviews, help you learn new technologies, and so on. Many people are still skeptical about the capabilities and quality of the content that AI creates, but even now these tools can save a lot of time and resources and increase the productivity of any developer.
The highest quality LLM at the moment. Works like a normal chat bot and has no problem understanding human speech in several languages.
Developed by Google as an alternative and direct competitor to ChatGPT.
AI-powered code completion tool developed by GitHub in collaboration with developers of ChatGPT. It integrates with popular code editors and provides real-time suggestions and completions for code as you write.
An alternative to GitHub Copilot that provides context-sensitive code suggestions based on patterns it learns from millions of publicly available code repositories.
An attack that allows an attacker to inject malicious code through a website into the browsers of other users.
An attack is possible if the user input that is passed to the SQL query is able to change the meaning of the statement or add another query to it.
When a site uses a POST request to perform a transaction, the attacker can forge a form, such as in an email, and send it to the victim. The victim, who is an authorized user interacting with this email, can then unknowingly send a request to the site with the data that the attacker has set.
The principle is based on the fact that an invisible layer is placed on top of the visible web page, in which the page the intruder wants is loaded, while the control (button, link) needed to perform the desired action is combined with the visible link or button the user is expected to click on.
A hacker attack that overloads the server running the web application by sending a huge number of requests.
A type of attack in which an attacker gets into the chain between two (or more) communicating parties to intercept a conversation or data transmission.
Using default configuration settings can be dangerous because they are common knowledge. For example, a common vulnerability is network administrators leaving the default admin:admin login and password.
Often your applications may use various tokens (e.g. to access a third-party paid API), logins and passwords (to connect to a database), various secret keys for signatures and so on. All this data should not be known and available to outsiders, so you can't leave them in the program code in any case. To solve this problem, there are environment variables.
.env file
A special file in which you can store all environment variables.
Loading the .env file
Variables are passed to the program using command line arguments. To do the same with the .env file, you need to use a special library for your language.
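In practice you would use a ready-made library (e.g. python-dotenv for Python or dotenv for Node.js), but the core of what such a library does can be sketched in a few lines of stdlib Python (the file contents and variable names here are invented for the example):

```python
import io
import os

def load_env(stream):
    """Parse KEY=VALUE lines and put them into os.environ (a minimal dotenv)."""
    for line in stream:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blank lines, comments and malformed lines
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip().strip('"')

# Normally you would use open(".env"); a string stands in for the file here
load_env(io.StringIO('# secrets\nDB_URL="postgres://localhost/app"\nAPI_KEY=abc123\n'))
print(os.environ["API_KEY"])  # abc123
```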
.env files
Learn how to upload .env files to the hosting services, and remember that such files must not be committed to remote repositories, so do not forget to add them to the exceptions via the .gitignore file.
Cryptographic algorithms based on hash functions are widely used for network security.
The process of converting an array of information (from a single letter to an entire literary work) into a unique short string of characters (called hash), which is unique to that array of information. Moreover, if you change even one character in this information array, the new hash will differ dramatically.
Hashing is an irreversible process, that is, the resulting hash cannot be recovered from the original data.
Hashes can be used as checksums that serve as proof of data integrity.
Cases where hashing different sets of information results in the same hash.
A random string of data that is added to the input data before hashing. This makes brute-force attacks (for example, with precomputed rainbow tables) much more difficult.
Popular hashing algorithms:
SHA-256 is the most popular hashing algorithm of the SHA-2 family. It is used, for example, in Bitcoin.
MD5 is the most popular algorithm of the MD family. It is now considered very vulnerable to collisions (there are even collision generators for MD5).
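The properties above are easy to see with Python's hashlib. Note that plain SHA-256 is too fast for password storage; a deliberately slow function such as bcrypt or hashlib.pbkdf2_hmac is preferable there, so this is only a sketch of the concepts:

```python
import hashlib
import os

# Changing a single character changes the hash completely
print(hashlib.sha256(b"hello").hexdigest())
print(hashlib.sha256(b"hellp").hexdigest())

def hash_password(password: str, salt: bytes) -> str:
    """Hash a password with a salt prepended to the input."""
    return hashlib.sha256(salt + password.encode()).hexdigest()

# The same password with different salts yields completely different hashes,
# which makes precomputed (rainbow) tables useless
salt_a, salt_b = os.urandom(16), os.urandom(16)
assert hash_password("secret", salt_a) != hash_password("secret", salt_b)
```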
Authentication is a procedure for verifying a user's identity, usually performed by comparing the password entered by the user with the password stored in the database. It often also includes identification: determining which user is being authenticated by a unique identifier (usually a login or email).
Authorization - the procedure of granting access rights to a certain user to perform certain operations. For example, ordinary users of the online store can view products and add them to cart. But only administrators can add new products or delete existing ones.
The simplest authentication scheme where the username and password of the user are passed in the Authorization header in unencrypted (base64-encoded) form. It is relatively secure when using HTTPS.
Technology that implements the ability to move from one service to another (not related to the first), without reauthorization.
Authorization protocol, which allows you to register in various applications using popular services (Google, Facebook, GitHub, etc.)
An open standard that allows you to create a single account for authenticating to multiple unrelated services.
An authentication standard based on access tokens. Tokens are created by the server, signed with a secret key and transmitted to the client, who then uses the token to verify their identity.
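In real projects you would use a maintained library (e.g. PyJWT for Python), but the mechanics of an HS256-signed JWT can be sketched with the standard library alone; the payload and secret below are invented for the example:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build header.payload.signature, signing with HMAC-SHA256."""
    header = {"alg": "HS256", "typ": "JWT"}
    head = b64url(json.dumps(header, separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    sig = hmac.new(secret, f"{head}.{body}".encode(), hashlib.sha256).digest()
    return f"{head}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Recompute the signature and return the payload if it matches."""
    head, body, sig = token.split(".")
    expected = hmac.new(secret, f"{head}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("invalid signature")
    padded = body + "=" * (-len(body) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"user_id": 42}, b"server-secret")
print(verify_jwt(token, b"server-secret"))  # {'user_id': 42}
```

Because the signature covers the header and payload, any client-side tampering with the token invalidates it.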
SSL (Secure Socket Layer) and TLS (Transport Layer Security) are cryptographic protocols that allow secure transmission of data between two computers on a network. They work essentially the same way: TLS is the newer, more secure successor to SSL. SSL is considered obsolete, although it is still used to support older devices.
TLS/SSL uses digital certificates issued by a certificate authority. One of the most popular is Let's Encrypt.
You need to know how to generate certificates and install them properly to make your server work over HTTPS.
To establish a secure connection between the client and the server, a special process must take place which includes the exchange of secret keys and information about encryption algorithms.
Testing is the process of verifying that all parts of a program behave as expected. Covering the product with the proper amount of tests allows you to quickly check later whether anything in the application broke after adding new functionality or changing old functionality.
The simplest kind of tests. As a rule, about 70-80% of all tests are unit tests. "Unit" means that not the whole system is tested, but its small and separate parts (functions, methods, components, etc.) in isolation from the others. Any external dependencies are usually replaced with mocks.
To give you an example, let's imagine a car. Its "units" are the engine, brakes, dashboard, etc. You can check them individually before assembly and, if necessary, replace or repair them. But you can assemble the car without having tested the units, and it will not go. You will have to disassemble everything and check every detail.
As a rule, the means of the standard language library are enough to write quality tests. But for more convenient and faster writing of tests, it is better to use third-party tools. For example:
- For Python: pytest (although the standard unittest is enough to start with).
- For JavaScript/TypeScript: Jest.
- For Go: testify.
- And so on...
Integration testing involves testing individual modules (components) in conjunction with others (that is, in integration). What was covered by a stub during Unit testing is now an actual component or an entire module.
Integration tests are the next step after units. Having tested each component individually, we cannot yet say that the basic functionality of the program works without errors. Potentially, there may still be many problems that will only surface after the different parts of the program interact with each other.
- Big Bang: Most of the modules developed are connected together to form either the whole system or most of it. If everything works, you can save a lot of time this way.
- Incremental approach: connecting two or more logically connected modules, then gradually adding more and more modules until the whole system is tested.
- Bottom-up approach: modules at the lower levels are tested first, then combined and tested with the modules of the next higher level, until all modules have been tested.
End-to-end tests imply checking the operation of the entire system as a whole. In this type of testing, the environment is implemented as close to real-life conditions as possible. We can draw the analogy that a robot sits at the computer and presses the buttons in the specified order, as a real user would do.
E2E is the most complex type of test. They take a long time to write and to execute, because they involve the whole application. So if your application is small (e.g. you are the only one developing it), writing Unit and some integration tests will probably be enough.
When you create a large application that needs to serve a large number of requests, there is a need to test this very ability to withstand heavy loads. There are many utilities available to create artificial load.
User-friendly interface, cross-platform, multi-threading support, extensibility, excellent reporting capabilities, support for many protocols for queries.
It has an interesting feature of virtual users, who do something with the application under test in parallel. This allows you to understand how the work of some users actively doing something with the service affects the work of others.
A very powerful tool oriented to more experienced users. The Scala programming language is used to describe the scripts.
A whole framework for easier work on JMeter, Gatling and so on. JSON or YAML is used to describe tests.
Regression testing is a type of testing aimed at detecting errors in already tested portions of the source code.
Statistically, the reappearance of the same bugs in code is quite frequent. And, most interestingly, the patches/fixes issued for them also stop working over time. Therefore, it is considered good practice to create a test when fixing a bug and to run it regularly on subsequent modifications.
Before you can deploy your code, you need to decide where you want to host it. You can rent your own server or use the services of cloud providers, which have great functionality for process automation, monitoring, load balancing, data storing and so on.
Provides a wide range of services for computing, storage, database management, networking, security, and more. AWS is one of the oldest and most established cloud service providers.
It is known for its focus on machine learning and artificial intelligence, as well as its integration with other Google services like Google Analytics and Google Maps.
Azure is known for its integration with other Microsoft services like Office 365 and Dynamics 365, as well as its support for a wide range of programming languages and frameworks.
This service provides virtual private servers (VPS) for developers and small businesses. It is also known for its simplicity and ease of use, as well as its competitive pricing.
Heroku is known for its ease of use and integration with popular development tools like Git, as well as its support for multiple programming languages and frameworks. It was a very popular choice for open source projects while it had a free plan (it is paid now).
As a rule, all of these services have an intuitive simple interface, detailed documentation, as well as many video tutorials on YouTube.
Container orchestration is the process of managing and automating the deployment, scaling, and maintenance of containerized applications across a cluster of machines.
The easiest way to manage containers is to use Docker directly, following a list of rules to keep your applications stable and safe in a production environment.
- Store your Docker images in a private registry to prevent unauthorized access and ensure security.
- Use secure authentication mechanisms for access to your Docker registry and implement security measures such as firewall rules to limit access to your Docker environment.
- Keep the size of your containers as small as possible by minimizing the number of unnecessary packages and dependencies.
- Use separate containers for different services (e.g. application server, database, cache, metrics, etc.).
- Use Docker volumes to store persistent data such as database files, logs, and configuration files.
It is a native orchestration tool for Docker to manage, scale and automate tasks such as container updates, recovery, traffic balancing, service discovery and so on.
A very popular orchestration platform that can work with a variety of container runtimes, including Docker. Kubernetes offers a more comprehensive set of features than Docker Swarm, including advanced scheduling, storage orchestration, and self-healing capabilities.
To streamline the process of building, testing and deploying code changes, and to integrate with other tools in the development ecosystem (such as code repositories, issue trackers and monitoring systems) for a more comprehensive development workflow, you can use automation tools and services.
CI/CD tool built into the GitHub platform, which enables developers to automate workflows for their repositories. A great choice if you already use GitHub. There are a large number of pre-built actions. One of the most useful features is the ability to trigger workflows based on various events, such as pull requests or other repository activity.
Highly configurable and extensible open source tool with a large ecosystem of plugins available to customize its functionality. Jenkins can be used in various environments, including on-premise, cloud-based and hybrid setups.
It is a cloud-based CI/CD platform designed to be fast and easy to set up, with a focus on developer productivity. Circle CI integrates with various cloud-based services, such as AWS, Google Cloud and Microsoft Azure. You can also host it locally on your network.
It is also a cloud-based CI/CD platform. It can be easily integrated with GitHub or Bitbucket. Travis CI supports multiple programming languages and frameworks. It also can be hosted as your local platform.
Logs capture detailed information about events, errors, and activities within your applications, facilitating troubleshooting and debugging processes. They provide a historical record of system behavior, allowing you to investigate issues, understand root causes, and improve overall system reliability and stability.
The easiest way to log an application is to use the tools of the standard language library or third-party packages. For example, in Python you can use the logging module or Loguru; in Node.js, Winston or Pino; and in Go, the log package or Logrus.
Designed to collect log data from various sources and provides fast searching and filtering capabilities.
Comprehensive log management platform that also centralizes log data from different sources. Graylog offers features like log ingestion, indexing, searching, and analysis.
Is a combination of three open-source tools used for log management and analysis. Elasticsearch is a distributed search and analytics engine that stores and indexes logs. Logstash is a log ingestion and processing pipeline that collects, filters, and transforms log data. Kibana is a web interface that allows you to search, visualize, and analyze logs stored in Elasticsearch.
Metrics help track key performance indicators, resource utilization, and system behavior, enabling you to identify bottlenecks, optimize performance, and ensure efficient resource allocation.
Open-source monitoring system that can collect metrics data from various sources. It employs a pull-based model, periodically scraping targets to collect metrics. The collected data is stored in a time-series database, allowing for powerful querying and analysis. Prometheus provides a flexible query language and a user-friendly interface to visualize and monitor metrics. It also includes an alerting system to define and trigger alerts based on specified rules and thresholds.
Tool for visualization and monitoring. It allows you to create visually appealing dashboards and charts to analyze and monitor metrics data from various sources, including databases and monitoring systems like Prometheus and InfluxDB.
Time-series database designed specifically for storing and querying metrics and events data. Offers a simple and flexible query language to extract valuable insights from the stored data. With its focus on time-series data, InfluxDB allows for easy aggregation, downsampling, and retention policies.
Profiling is a program performance analysis, which reveals bottlenecks where the highest CPU and/or memory load occurs.
The information obtained after profiling can be very useful for performance optimization. Profiling can also be useful for debugging the program to find bugs and errors.
As needed - when there are obvious problems or suspicions.
For Python, use: cProfile, line_profiler.
For Node.js: built-in Profiler, Clinic.js, Trace events module.
For Go: runtime/pprof, trace utility.
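As a minimal illustration of profiling in practice, here is a cProfile sketch in Python; the profiled function is invented, deliberately wasteful work just to show up in the report:

```python
import cProfile
import io
import pstats

def build_report():
    # deliberately wasteful work so the function shows up in the profile
    return sum(i * i for i in range(200_000))

profiler = cProfile.Profile()
profiler.enable()
build_report()
profiler.disable()

# Print the five most expensive calls, sorted by cumulative time
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

The resulting table shows per-function call counts and time, which is exactly the information used to locate bottlenecks.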
Benchmark (in software) is a tool for measuring the execution time of program code. As a rule, the measurement is done by multiple runs of the same code (or a certain part of it), where the average time is then calculated, and can also provide information about the number of operations performed and the amount of memory allocated.
Benchmarks are useful for both evaluating performance and choosing the most effective solution to the problem at hand.
For Python: timeit, pytest-benchmark.
For Node.js: console.time, Artillery.
For Go: testing.B, Benchstat.
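A small benchmark with Python's timeit, comparing two ways of building a string; the snippets and run counts are arbitrary, chosen only to illustrate the "run many times, report the average" idea:

```python
import timeit

# Each snippet is executed 10,000 times; timeit returns the total time.
join_time = timeit.timeit("''.join(str(i) for i in range(100))", number=10_000)
concat_time = timeit.timeit(
    "s = ''\nfor i in range(100):\n    s += str(i)", number=10_000
)

print(f"join:   {join_time / 10_000:.2e} s per run")
print(f"concat: {concat_time / 10_000:.2e} s per run")
```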
There are also benchmarks for measuring the performance of networked applications, which provide detailed information about the average request processing time, the maximum number of supported connections, data transfer rates and so on (see list of HTTP benchmarks).
Caching is one of the most effective solutions for optimizing the performance of web applications. With caching, you can reuse previously received resources (static files), thereby reducing latency, network traffic, and the time it takes to fully load content.
A system of servers located around the world. Such servers store duplicates of static content and deliver it much faster to users in close geographical proximity. Using a CDN also reduces the load on the main server.
Based on loading pages and other static data from the local cache. For this, the server sends the browser (client) special response headers: Cache-Control, Expires, ETag, Last-Modified. When a cached copy is still valid, the server can answer a revalidation request with the 304 Not Modified status instead of resending the content.
A daemon program that implements high-performance in-memory caching based on key-value pairs. Unlike Redis, it cannot serve as reliable long-term storage, so it is suitable only for caching.
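The key-value-with-expiry idea behind such caches can be sketched in a few lines; this toy class (invented for illustration, not an actual Memcached client) stores each entry together with its expiry time and evicts stale entries lazily on read:

```python
import time

class TTLCache:
    """A tiny in-memory key-value cache with per-entry expiry (illustration only)."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds=60):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # lazily evict expired entries
            return default
        return value

cache = TTLCache()
cache.set("user:42", {"name": "Alice"}, ttl_seconds=60)
print(cache.get("user:42"))
```

Real cache servers add eviction policies (e.g. LRU), memory limits, and network access on top of this basic idea.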
When the entire application code is maximally optimized, the server capacity is reaching its limits, and the load keeps growing, you have to resort to clustering and load balancing mechanisms. The idea is to combine groups of servers into clusters, with the load distributed between them using special methods and algorithms called balancing.
- DNS Balancing. Several IP addresses are allocated for one domain name, and the server to which a request is redirected is determined by the Round Robin algorithm.
- Building an NLB (Network Load Balancing) cluster. Used to manage two or more servers as a single virtual cluster.
- Balancing by territory. An example is the Anycast addressing method.
Communication with the client terminates at the balancer, which acts as a proxy. It communicates with the servers on its own behalf, passing information about the client in additional data and headers. Example: HAProxy.
The balancer analyzes client requests and redirects them to different servers depending on the nature of the requested content. Examples are the Upstream module in Nginx (which is responsible for balancing) and pgpool-II for PostgreSQL (it can, for example, route read requests to one server and write requests to another).
- Round Robin. Each request is sent in turn to each server (first to the first, then to the second and so on in a circle).
- Weighted Round Robin. An improved version of Round Robin that also takes each server's performance into account: more powerful servers receive a proportionally larger share of requests.
- Least Connections. Each subsequent request is sent to the server with the fewest active connections.
- Destination Hash Scheduling. The server that processes the request is selected from a static table based on the recipient's IP address.
- Source Hash Scheduling. The server that will process the request is selected from the table by the sender's IP address.
- Sticky Sessions. Requests are distributed based on the user's IP address, so that requests from the same client are always routed to the same server rather than bouncing around the pool.
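Two of the algorithms above can be sketched in a few lines of Python; the server addresses and connection counts are made up for illustration:

```python
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round Robin: hand out servers in a repeating cycle.
rr = itertools.cycle(servers)
rr_order = [next(rr) for _ in range(5)]
print(rr_order)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2']

# Least Connections: pick the server with the fewest active connections.
active = {"10.0.0.1": 12, "10.0.0.2": 3, "10.0.0.3": 7}
target = min(active, key=active.get)
print(target)  # 10.0.0.2
```

Production balancers implement these same ideas, but additionally track server health and remove failed servers from rotation.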
A standard in the development world. An incredibly simple, yet powerful markup language for describing your projects. As a matter of fact, the resource you are reading right now is written with Markdown.
A cheatsheet on all the syntactic possibilities of the language.
A collection of various resources for working with Markdown.
A collection of beautiful README.md files (this is the main file of any repository on GitHub that uses Markdown).
Markdown is not only used for writing documentation. This incredible tool is also great for learning and for creating digital notes. Personally, I use the Obsidian editor for outlining new material.
For every modern programming language there are special tools which allow you to write documentation directly in the program code. So you can read the description of methods, functions, structures and so on right inside your IDE. As a rule, this kind of documentation is done in the form of ordinary comments, taking into account some syntactic peculiarities.
To make your work and the work of other developers easier. In the long run this will save more time than digging through the code to figure out how everything works, what parameters to pass to functions, or what methods a particular class has. Over time you will inevitably forget your own code, so the documentation you have already written will be useful to you personally.
For each language, it's different. Many have their own well-established approaches:
- Docstring for Python.
- JSDoc for JavaScript.
- Godoc for Go.
- KDoc and Dokka for Kotlin.
- Javadoc for Java.
- And look for others by searching for `documentation engine for <your lang>`.
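As an example of the docstring approach used in Python, here is a hypothetical function documented in the common Google style; tools like Sphinx (and IDEs) can render this directly from the source:

```python
def transfer(amount: float, source: str, destination: str) -> bool:
    """Move ``amount`` between two accounts.

    Args:
        amount: Sum to transfer; must be positive.
        source: Identifier of the debited account.
        destination: Identifier of the credited account.

    Returns:
        True if the transfer succeeded, False otherwise.
    """
    ...  # body omitted; this stub only illustrates the documentation style

# The docstring is attached to the function object itself:
print(transfer.__doc__)
```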
Easy-to-understand documentation will allow other users to understand and use your product faster. Writing documentation from scratch is a tedious process. There are common specifications and auto-generation tools to solve this problem.
A specification that describes how the API should be documented so that it is readable by humans and machines alike.
A set of tools that allows you to create convenient API documentation based on the OpenAPI specification.
A tool that allows you to automatically generate interactive documentation, which you can not only read but also actively interact with (send HTTP requests).
A kind of playground in which you can write documentation and immediately see the result of the generated page. You can use YAML or JSON format file for this.
Allows you to automatically create API client libraries, server stubs and documentation.
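As a taste of the specification, here is a minimal hypothetical OpenAPI 3.0 document describing a single endpoint (all names are invented); Swagger UI and Codegen consume exactly this kind of file:

```yaml
openapi: 3.0.3
info:
  title: Example Users API   # all names here are made up for illustration
  version: 1.0.0
paths:
  /users/{id}:
    get:
      summary: Get a user by id
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested user
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                  name:
                    type: string
```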
Over time, when your project grows and contains many modules, a single README page on GitHub may no longer be enough. It is then appropriate to create a separate site for your project's documentation. You don't need to build it by hand, because there are many generators for creating nice-looking and handy documentation.
Probably the most popular documentation generator using GitHub/Git and Markdown.
Open-source generator from Facebook (Meta).
A simple and widely customizable Markdown documentation generator.
Minimalistic documentation generator for REST API.
Another simple, light and minimalistic static generator.
A generator with a modern and advanced design.
A static generator from the developers of the Rust language.
Used to structure programs that can be decomposed into groups of subtasks, each of which is at a particular level of abstraction. Each layer provides services to the next higher layer.
The server component will provide services to multiple client components. Clients request services from the server and the server provides relevant services to those clients.
The master component distributes the work among identical slave components, and computes a final result from the results which the slaves return.
Each processing step is enclosed within a filter component. Data to be processed is passed through pipes. These pipes can be used for buffering or for synchronization purposes.
A broker component is responsible for the coordination of communication among components.
Peers may function both as a client, requesting services from other peers, and as a server, providing services to other peers; a peer can change its role dynamically over time.
Has 4 major components: event source, event listener, channel and event bus. Sources publish messages to particular channels on an event bus, and listeners subscribed to those channels are notified of the messages.
Separate internal representations of information from the ways information is presented to, and accepted from, the user.
Useful for problems for which no deterministic solution strategies are known.
Used for designing a component that interprets programs written in a dedicated language.
Provide various object creation mechanisms, which increase flexibility and reuse of existing code.
Explain how to assemble objects and classes into larger structures, while keeping these structures flexible and efficient.
Concerned with algorithms and the assignment of responsibilities between objects.
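As a small taste of the creational category, here is a sketch of the Factory Method idea; the notifier classes are invented for illustration:

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"email: {message}"

class SmsNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"sms: {message}"

def make_notifier(kind: str) -> Notifier:
    """Factory that hides which concrete class is instantiated."""
    registry = {"email": EmailNotifier, "sms": SmsNotifier}
    return registry[kind]()

print(make_notifier("sms").send("server down"))  # sms: server down
```

Calling code depends only on the abstract `Notifier` interface, so new channels can be added without changing it, which is the flexibility the creational patterns aim for.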
A monolith is a complete application that contains a single code base (written in a single technology stack and stored in a single repository) and has a single entry point to run the entire application. This is the most common approach for building applications alone or with a small team.
- Ease of development (everything in one style and in one place).
- Ease of deployment.
- Easy to scale at the start.
- Increasing complexity (as the project grows, the entry threshold for new developers increases).
- Build and startup times grow.
- Adding new functionality that affects old functionality becomes harder.
- It is difficult (or impossible) to apply new technologies.
A microservice is also a complete application with a single code base. But, unlike a monolith, such an application is responsible for only one functional unit. That is, it is a small service that solves only one task, but does it well.
- Each individual microservice can have its own technology stack and be developed independently.
- Easy to add new functionality (just create a new microservice).
- A lower entry threshold for new developers.
- Short build and startup times.
- The complexity of implementing interaction between all microservices.
- More difficult to operate than several copies of the monolith.
- Complexity of performing transactions.
- Changes affecting multiple microservices must be coordinated.
Over time, when the load on your application starts to grow (more users come, new functionality appears and, as a consequence, more CPU time is involved), it becomes necessary to increase the server capacity. There are 2 main approaches for this:
It means increasing the capacity of the existing server. For example, this may include increasing the size of RAM, installing faster storage or increasing its volume, as well as the purchase of a new processor with a higher clock frequency and/or a larger number of cores and threads. Vertical scaling has its limits, because the capacity of a single server cannot be increased indefinitely.
The process of deploying new servers. This approach requires building a robust and scalable architecture that allows you to distribute the logic of the entire application across multiple physical machines.