Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization. DevOps Interview Questions
:information_source: This repo contains questions and exercises on various technical topics, sometimes related to DevOps and SRE
:bar_chart: There are currently 2386 exercises and questions
:books: To learn more about DevOps and SRE, check the resources in devops-resources repository
:warning: You can use these for preparing for an interview but most of the questions and exercises don't represent an actual interview. Please read the FAQ page for more details
:stop_sign: If you are interested in pursuing a career as a DevOps engineer, learning some of the concepts mentioned here would be useful, but you should know it's not about learning all the topics and technologies mentioned in this repository
:pencil: You can add more exercises by submitting pull requests :) Read about contribution guidelines here
A set of protocols that define how two or more devices can communicate with each other. To learn more about TCP/IP, read here
Ethernet simply refers to the most common type of Local Area Network (LAN) used today. A LAN—in contrast to a WAN (Wide Area Network), which spans a larger geographical area—is a connected network of computers in a small area, like your office, college campus, or even home.
A MAC address is a unique identification number or code used to identify individual devices on the network.
Packets sent on Ethernet always come from a MAC address and are sent to a MAC address. When a network adapter receives a packet, it compares the packet's destination MAC address to the adapter's own MAC address.
When a device sends a packet to the broadcast MAC address (FF:FF:FF:FF:FF:FF), it is delivered to all stations on the local network. Ethernet broadcasts are used to resolve IP addresses to MAC addresses (by ARP) at the data link layer.
An Internet Protocol address (IP address) is a numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. An IP address serves two main functions: host or network interface identification and location addressing.
A subnet mask is a 32-bit number that masks an IP address and divides it into a network address and a host address. A subnet mask is made by setting the network bits to all "1"s and the host bits to all "0"s. Within a given network, two host addresses are reserved for special purposes and cannot be assigned to hosts: the all-zeros host address is the network address, and the all-ones host address is the broadcast address.
For Example
| Address Class | No of Network Bits | No of Host Bits | Subnet mask | CIDR notation |
| ------------- | ------------------ | --------------- | --------------- | ------------- |
| A | 8 | 24 | 255.0.0.0 | /8 |
| A | 9 | 23 | 255.128.0.0 | /9 |
| A | 12 | 20 | 255.240.0.0 | /12 |
| A | 14 | 18 | 255.252.0.0 | /14 |
| B | 16 | 16 | 255.255.0.0 | /16 |
| B | 17 | 15 | 255.255.128.0 | /17 |
| B | 20 | 12 | 255.255.240.0 | /20 |
| B | 22 | 10 | 255.255.252.0 | /22 |
| C | 24 | 8 | 255.255.255.0 | /24 |
| C | 25 | 7 | 255.255.255.128 | /25 |
| C | 28 | 4 | 255.255.255.240 | /28 |
| C | 30 | 2 | 255.255.255.252 | /30 |
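A quick way to sanity-check these values is Python's built-in `ipaddress` module (the CIDR below is just an example):

```python
import ipaddress

# Parse a network in CIDR notation
net = ipaddress.ip_network("192.168.1.0/24")

print(net.netmask)            # 255.255.255.0
print(net.network_address)    # 192.168.1.0   (reserved: identifies the network)
print(net.broadcast_address)  # 192.168.1.255 (reserved: broadcast address)
print(net.num_addresses - 2)  # 254 usable host addresses
```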
You can read more about the OSI model in penguintutor.com
Unicast: One-to-one communication where there is one sender and one receiver.
Broadcast: Sending a message to everyone in the network. The address ff:ff:ff:ff:ff:ff is used for broadcasting. Two common protocols which use broadcast are ARP and DHCP.
Multicast: Sending a message to a group of subscribers. It can be one-to-many or many-to-many.
CSMA/CD stands for Carrier Sense Multiple Access / Collision Detection. Its primary focus is to manage access to a shared medium/bus where only one host can transmit at a given point of time.
CSMA/CD algorithm:

1. Before sending a frame, check whether the medium is idle.
2. If it is idle, start transmitting; otherwise, wait.
3. While transmitting, monitor the medium for collisions.
4. If a collision is detected, send a jam signal, stop transmitting, and retry after a random backoff time.
A router is a physical or virtual appliance that passes information between two or more packet-switched computer networks. A router inspects a given data packet's destination Internet Protocol address (IP address), calculates the best way for it to reach its destination and then forwards it accordingly.
Network Address Translation (NAT) is a process in which one or more local IP address is translated into one or more Global IP address and vice versa in order to provide Internet access to the local hosts.
A proxy server acts as a gateway between you and the internet. It’s an intermediary server separating end users from the websites they browse.
If you’re using a proxy server, internet traffic flows through the proxy server on its way to the address you requested. The request then comes back through that same proxy server (there are exceptions to this rule), and then the proxy server forwards the data received from the website to you.
Proxy servers provide varying levels of functionality, security, and privacy depending on your use case, needs, or company policy.
TCP 3-way handshake or three-way handshake is a process which is used in a TCP/IP network to make a connection between server and client.
A three-way handshake is primarily used to create a TCP socket connection. It works when:

- The client sends a SYN packet to the server.
- The server responds with a SYN-ACK packet.
- The client replies with an ACK packet, and the connection is established.
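As a minimal sketch, the handshake is performed by the kernel when a client calls `connect()`; the host and port below are placeholders:

```python
import socket

# create_connection() makes the kernel perform the three-way handshake
# (SYN -> SYN-ACK -> ACK) before any application data is exchanged.
with socket.create_connection(("example.com", 80), timeout=5) as s:
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(s.recv(100))
```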
From wikipedia: "the length of time it takes for a signal to be sent plus the length of time it takes for an acknowledgement of that signal to be received"
Bonus question: what is the RTT of LAN?
TCP establishes a connection between the client and the server to guarantee the order of the packets; UDP, on the other hand, does not establish a connection between client and server and doesn't handle packet order. This makes UDP more lightweight than TCP and a perfect candidate for services like streaming.
Penguintutor.com provides a good explanation.
A default gateway serves as an access point or IP router that a networked computer uses to send information to a computer in another network or the internet.
ARP stands for Address Resolution Protocol. When you try to ping an IP address on your local network, say 192.168.1.1, your system has to turn the IP address 192.168.1.1 into a MAC address. This involves using ARP to resolve the address, hence its name.
Systems keep an ARP look-up table where they store information about what IP addresses are associated with what MAC addresses. When trying to send a packet to an IP address, the system will first consult this table to see if it already knows the MAC address. If there is a value cached, ARP is not used.
It stands for Dynamic Host Configuration Protocol, and allocates IP addresses, subnet masks and gateways to hosts. This is how it works:

- A host entering the network broadcasts a DHCPDISCOVER message.
- The DHCP server responds with a DHCPOFFER, offering an IP address.
- The host requests the offered address with a DHCPREQUEST.
- The server confirms the lease with a DHCPACK.
Read more here
NAT stands for network address translation. It’s a way to map multiple local private addresses to a public one before transferring the information. Organizations that want multiple devices to employ a single IP address use NAT, as do most home routers. For example, your computer's private IP could be 192.168.1.100, but your router maps the traffic to its public IP (e.g. 1.1.1.1). Any device on the internet would see the traffic coming from your public IP (1.1.1.1) instead of your private IP (192.168.1.100).
The control plane is the part of the network that decides how to route and forward packets to a different location.
The data plane is the part of the network that actually forwards the data/packets.
Refers to monitoring and management functions.
Control Plane.
Latency is the time taken for information to reach its destination from the source.
Bandwidth is the capacity of a communication channel to measure how much data the latter can handle over a specific time period. More bandwidth would imply more traffic handling and thus more data transfer.
Throughput refers to the measurement of the real amount of data transferred over a certain period of time across any transmission channel.
Latency. To have a good latency, a search query should be forwarded to the closest datacenter.
Throughput. To have a good throughput, the upload stream should be routed to an underutilized link.
00110011110100011101
The internet refers to a network of networks, transferring huge amounts of data around the globe.
The World Wide Web is an application running on millions of servers, on top of the internet, accessed through what is known as a web browser.
ISP (Internet Service Provider) is the local internet company provider.
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Fork 101 | Fork | Link | Link | |
Fork 102 | Fork | Link | Link |
From the book "Operating Systems: Three Easy Pieces":
"responsible for making it easy to run programs (even allowing you to seemingly run many at the same time), allowing programs to share memory, enabling programs to interact with devices, and other fun stuff like that".
A process is a running program. A program consists of one or more instructions, and the program (or process) is executed by the operating system.
It would support the following:
False. It was true in the past but today's operating systems perform lazy loading which means only the relevant pieces required for the process to run are loaded first.
Even when using a system with one physical CPU, it's possible to allow multiple users to work on it and run programs. This is possible with time sharing where computing resources are shared in a way it seems to the user the system has multiple CPUs but in fact it's simply one CPU shared by applying multiprogramming and multi-tasking.
Somewhat the opposite of time sharing. While in time sharing a resource is used for a while by one entity and then the same resource can be used by another entity, in space sharing the space is shared by multiple entities but in a way where it's not being transferred between them.
It's used by one entity until this entity decides to get rid of it. Take for example storage. In storage, a file is yours until you decide to delete it.
CPU scheduler
The kernel is part of the operating system and is responsible for tasks like:

- Memory management
- Process management and scheduling
- Device management (through drivers)
- Handling system calls
True
Buffer: a reserved place in RAM which is used to hold data for temporary purposes.
Cache: usually used when processes are reading from and writing to the disk, to make the process faster by making similar data used by different programs easily accessible.
Virtualization uses software to create an abstraction layer over computer hardware that allows the hardware elements of a single computer—processors, memory, storage and more - to be divided into multiple virtual computers, commonly called virtual machines (VMs).
Red Hat: "A hypervisor is software that creates and runs virtual machines (VMs). A hypervisor, sometimes called a virtual machine monitor (VMM), isolates the hypervisor operating system and resources from the virtual machines and enables the creation and management of those VMs."
Read more here
Hosted hypervisors and bare-metal hypervisors.
Due to having its own drivers and direct access to hardware components, a bare-metal hypervisor will often have better performance, stability and scalability.
On the other hand, there will probably be some limitations regarding loading (any) drivers, so a hosted hypervisor will usually benefit from better hardware compatibility.
Operating system virtualization
Network functions virtualization
Desktop virtualization
Yes, it's an operating-system-level virtualization, where the kernel is shared and allows the use of multiple isolated user-space instances.
The introduction of virtual machines allowed companies to deploy multiple business applications on the same hardware, while keeping each application separated from the others in a secure way, each running on its own separate operating system.
In the following block of code, `x` is a class attribute while `self.y` is an instance attribute:
class MyClass(object):
x = 1
def __init__(self, y):
self.y = y
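A short usage sketch of the class above: the class attribute is shared, while each instance gets its own `y`:

```python
a = MyClass(5)
b = MyClass(7)

print(MyClass.x, a.x, b.x)  # 1 1 1 -> x is shared through the class
print(a.y, b.y)             # 5 7   -> y is per-instance

MyClass.x = 42              # changing the class attribute affects all instances
print(a.x, b.x)             # 42 42
```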
# Note that you generally don't need to know the compiling process but knowing where everything comes from
# and giving complete answers shows that you truly know what you are talking about.
Generally, every compiling process has two steps:
- Analysis
- Code Generation.
Analysis can be broken into:
1. Lexical analysis (Tokenizes source code)
2. Syntactic analysis (Check whether the tokens are legal or not, tldr, if syntax is correct)
for i in 'foo'
^
SyntaxError: invalid syntax
We missed ':'
3. Semantic analysis (Contextual analysis, legal syntax can still trigger errors, did you try to divide by 0,
hash a mutable object or use an undeclared function?)
1/0
ZeroDivisionError: division by zero
These three analysis steps are responsible for error handling.
The second step would be responsible for errors, mostly syntax errors, which are the most common kind of error.
The third step would be responsible for Exceptions.
As we have seen, Exceptions are semantic errors, there are many builtin Exceptions:
ImportError
ValueError
KeyError
FileNotFoundError
IndentationError
IndexError
...
You can also have user defined Exceptions that have to inherit from the `Exception` class, directly or indirectly.
Basic example:
class DividedBy2Error(Exception):
def __init__(self, message):
self.message = message
def division(dividend,divisor):
if divisor == 2:
raise DividedBy2Error('I dont want you to divide by 2!')
return dividend / divisor
division(100, 2)
>>> __main__.DividedBy2Error: I dont want you to divide by 2!
Exceptions: Errors detected during execution are called Exceptions.
Handling Exceptions: When an error occurs, or an exception as we call it, Python will normally stop and generate an error message.
Exceptions can be handled using `try` and `except` statements in Python.
Example: The following example asks the user for input until a valid integer has been entered. If the user enters a non-integer value, an exception will be raised; the `except` clause catches it and the user is asked again to enter a valid integer.
while True:
try:
a = int(input("please enter an integer value: "))
break
except ValueError:
print("Ops! Please enter a valid integer value.")
For more details about errors and exceptions follow this https://docs.python.org/3/tutorial/errors.html
def true_or_false():
    try:
        return True
    finally:
        return False

It returns False: the `finally` block always runs, and its `return` overrides the `return True` from the `try` block.
It is used to emulate callable objects. It allows a class instance to be called as a function.
class Foo:
def __init__(self: object) -> None:
pass
def __call__(self: object) -> None:
print("Called!")
f = Foo()
f()
Called!
A `lambda` expression is an 'anonymous' function; the difference from a normal function defined with the keyword `def` is in the syntax and usage.
The syntax is:
lambda [parameters]: [expression]
Examples:
x = lambda a: a + 10
print(x(10))
addition = lambda x, y: x + y
print(addition(10, 20))
square = lambda x : x ** 2
print(square(5))
Generally, it is considered bad practice under PEP 8 to assign a lambda expression to a name; lambdas are meant to be used as parameters and inside other defined functions.
x, y = y, x
First you ask the user for the amount of numbers that will be used. Use a while loop that runs until amount_of_numbers becomes 0, subtracting 1 from amount_of_numbers on each iteration. In the while loop you ask the user for a number, which is added to a variable each time the loop runs.
def return_sum():
amount_of_numbers = int(input("How many numbers? "))
total_sum = 0
while amount_of_numbers != 0:
num = int(input("Input a number. "))
total_sum += num
amount_of_numbers -= 1
return total_sum
li = [2, 5, 6]
print("{0:.3f}".format(sum(li)/len(li)))
A tuple is a built-in data type in Python. It's used for storing multiple items in a single variable.
A list, as opposed to a tuple, is a mutable data type. It means we can modify it and add items to it.
x = [1, 2, 3]
x.append(2)
some_list[-1]
Don't use `append` unless you would like the whole list added as a single item.
my_list[0:3] = []
numbers = [1, 2, 3, 4, 5]
numbers.insert(0, 0)
print(numbers)
numbers_1 = [2, 3, 4, 5]
numbers_2 = [0, 1]
numbers_1 = numbers_2 + numbers_1
print(numbers_1)
sorted_li = sorted(li, key=len)
Or without creating a new list:
li.sort(key=len)
sorted(list) will return a new list (original list doesn't change)
list.sort() will return None but the list is changed in-place
sorted() works on any iterable (Dictionaries, Strings, ...)
list.sort() is faster than sorted(list) in case of Lists
[['1', '2', '3'], ['4', '5', '6']]
nested_li = [['1', '2', '3'], ['4', '5', '6']]
[[int(x) for x in li] for li in nested_li]
sorted(li1 + li2)
Another way:
i, j = 0, 0
merged_li = []
while i < len(li1) and j < len(li2):
if li1[i] < li2[j]:
merged_li.append(li1[i])
i += 1
else:
merged_li.append(li2[j])
j += 1
merged_li = merged_li + li1[i:] + li2[j:]
There are many ways of solving this problem:
# Note: :list and -> bool are just python typings, they are not needed for the correct execution of the algorithm.
Taking advantage of sets and len:
def is_unique(l:list) -> bool:
return len(set(l)) == len(l)
This approach can be seen used in other programming languages.
def is_unique2(l:list) -> bool:
seen = []
for i in l:
if i in seen:
return False
seen.append(i)
return True
Here we just count and make sure every element is just repeated once.
def is_unique3(l:list) -> bool:
for i in l:
if l.count(i) > 1:
return False
return True
This one might look more convoluted but hey, one liners.
def is_unique4(l:list) -> bool:
return all(map(lambda x: l.count(x) < 2, l))
def my_func(li = []):
li.append("hmm")
print(li)
If we call it 3 times, what would be the result each call?

['hmm']
['hmm', 'hmm']
['hmm', 'hmm', 'hmm']

The default list is created only once, when the function is defined, so the same list object is shared (and keeps growing) across calls.
for item in some_list:
print(item)
for i, item in enumerate(some_list):
print(i)
Using range like this
for i in range(1, len(some_list)):
some_list[i]
Another way is using slicing
for i in some_list[1:]:
Method 1
for i in reversed(li):
...
Method 2
n = len(li) - 1
while n >= 0:
    ...
    n -= 1
li = [[1, 4], [2, 1], [3, 9], [4, 2], [4, 5]]
sorted(li, key=lambda l: l[1])
or
li.sort(key=lambda l: l[1])
nums = [1, 2, 3]
letters = ['x', 'y', 'z']
list(zip(nums, letters))
From Docs: "List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.".
It's better because they're compact, faster and have better readability.
number_lists = [[1, 7, 3, 1], [13, 93, 23, 12], [123, 423, 456, 653, 124]]
odd_numbers = []
for number_list in number_lists:
for number in number_list:
if number % 2 != 0:
odd_numbers.append(number)
print(odd_numbers)
number_lists = [[1, 7, 3, 1], [13, 93, 23, 12], [123, 423, 456, 653, 124]]
odd_numbers = [number for number_list in number_lists for number in number_list if number % 2 != 0]
print(odd_numbers)
[{'name': 'Mario', 'food': ['mushrooms', 'goombas']}, {'name': 'Luigi', 'food': ['mushrooms', 'turtles']}]
Extract all types of food. The final output should be: {'mushrooms', 'goombas', 'turtles'}

brothers_menu = \
[{'name': 'Mario', 'food': ['mushrooms', 'goombas']}, {'name': 'Luigi', 'food': ['mushrooms', 'turtles']}]
# "Classic" Way
def get_food(brothers_menu) -> set:
temp = []
for brother in brothers_menu:
for food in brother['food']:
temp.append(food)
return set(temp)
# One liner way (Using list comprehension)
set([food for bro in brothers_menu for food in bro['food']])
my_dict = dict(x=1, y=2)
# OR
my_dict = {'x': 1, 'y': 2}
# OR
my_dict = dict([('x', 1), ('y', 2)])
del my_dict['some_key']
You can also use `my_dict.pop('some_key')`, which returns the value of the removed key.
{k: v for k, v in sorted(x.items(), key=lambda item: item[1])}
dict(sorted(some_dictionary.items()))
some_dict1.update(some_dict2)
{'a': {'b': {'c': 1}}}
from functools import reduce

output = {}
string = "a.b.c"
path = string.split('.')
target = reduce(lambda d, k: d.setdefault(k, {}), path[:-1], output)
target[path[-1]] = 1
print(output)
with open('file.txt', 'w') as file:
file.write("My insightful comment")
import json
with open('file.json', 'w') as f:
f.write(json.dumps(dict_var))
import os
print(os.getcwd())
/dir1/dir2/file1
print the file name (file1)

import os
print(os.path.basename('/dir1/dir2/file1'))
# Another way
print(os.path.split('/dir1/dir2/file1')[1])
/dir1/dir2/file1
import os
## Part 1.
# os.path.dirname gives path removing the end component
dirpath = os.path.dirname('/dir1/dir2/file1')
print(dirpath)
## Part 2.
print(os.path.basename(dirpath))
Joining `/home` and `luigi` will result in `/home/luigi`.
Using the re module
While you iterate through the characters, store them in a dictionary and check for every character whether it's already in the dictionary.
def firstRepeatedCharacter(str):
chars = {}
for ch in str:
if ch in chars:
return ch
else:
chars[ch] = 0
x = "itssssssameeeemarioooooo"
y = ''.join(set(x))
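Note that `set` doesn't preserve character order. If order matters, one option (Python 3.7+, where dicts keep insertion order) is `dict.fromkeys`:

```python
x = "itssssssameeeemarioooooo"
y = ''.join(dict.fromkeys(x))
print(y)  # itsamero -> each character kept once, in order of first appearance
```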
def permute_string(string):
if len(string) == 1:
return [string]
permutations = []
for i in range(len(string)):
swaps = permute_string(string[:i] + string[(i+1):])
for swap in swaps:
permutations.append(string[i] + swap)
return permutations
print(permute_string("abc"))
Short way (but probably not acceptable in interviews):
from itertools import permutations
[''.join(p) for p in permutations("abc")]
Detailed answer can be found here: http://codingshell.com/python-all-string-permutations
You can use the "count" method like this:
ImAString.count(" ")
>> ', '.join(["One", "Two", "Three"])
>> " ".join("welladsadgadoneadsadga".split("adsadga")[:2])
>> "".join(["c", "t", "o", "a", "o", "q", "l"])[0::2]
>>> 'One, Two, Three'
>>> 'well done'
>>> 'cool'
x = "pizza"
, what would be the result of x[::-1]
?It will reverse the string, so x would be equal to azzip
.
"".join(["a", "h", "m", "a", "h", "a", "n", "q", "r", "l", "o", "i", "f", "o", "o"])[2::3]
mario
for i in range(3, 3):
print(i)
No output :)
What does `yield` do? When would you use it?
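As a sketch: `yield` turns a function into a generator that produces values lazily, one at a time, which is useful for large or infinite sequences that shouldn't be built in memory at once:

```python
def countdown(n):
    while n > 0:
        yield n  # pause here and hand the value back to the caller
        n -= 1

for i in countdown(3):
    print(i)  # prints 3, 2, 1 -- one value per iteration, computed on demand
```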
[['Mario', 90], ['Geralt', 82], ['Gordon', 88]]
How to sort the list by the numbers in the nested lists? One way is:
the_list.sort(key=lambda x: x[1])
For the following slicing exercises, assume you have the following list: my_list = [8, 2, 1, 10, 5, 4, 3, 9]
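A few example slices of that list (the exercise prompts themselves aren't shown here, so these are just illustrations):

```python
my_list = [8, 2, 1, 10, 5, 4, 3, 9]

print(my_list[0:3])   # [8, 2, 1]    -> first three items
print(my_list[::2])   # [8, 1, 5, 3] -> every second item
print(my_list[-3:])   # [4, 3, 9]    -> last three items
print(my_list[::-1])  # [9, 3, 4, 5, 10, 1, 2, 8] -> reversed copy
```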
pdb :D
What does an empty `return` return? Short answer is: It returns a None object.
We could go a bit deeper and explain the difference between
def a ():
return
>>> None
And
def a ():
pass
>>> None
Or we could be asked this as a following question, since they both give the same result.
We could use the dis module to see what's going on:
2 0 LOAD_CONST 0 (<code object a at 0x0000029C4D3C2DB0, file "<dis>", line 2>)
2 LOAD_CONST 1 ('a')
4 MAKE_FUNCTION 0
6 STORE_NAME 0 (a)
5 8 LOAD_CONST 2 (<code object b at 0x0000029C4D3C2ED0, file "<dis>", line 5>)
10 LOAD_CONST 3 ('b')
12 MAKE_FUNCTION 0
14 STORE_NAME 1 (b)
16 LOAD_CONST 4 (None)
18 RETURN_VALUE
Disassembly of <code object a at 0x0000029C4D3C2DB0, file "<dis>", line 2>:
3 0 LOAD_CONST 0 (None)
2 RETURN_VALUE
Disassembly of <code object b at 0x0000029C4D3C2ED0, file "<dis>", line 5>:
6 0 LOAD_CONST 0 (None)
2 RETURN_VALUE
An empty `return` is exactly the same as `return None`, and functions without any explicit `return` will always return None regardless of the operations. Therefore:
def sum(a, b):
global c
c = a + b
>>> None
li = []
for i in range(1, 10):
li.append(i)
[i for i in range(1, 10)]
def is_int(num):
if isinstance(num, int):
print('Yes')
else:
print('No')
What would be the result of is_int(2) and is_int(False)?

Both print 'Yes': 2 is an int, and False is an instance of bool, which is a subclass of int.
The reason we need to implement it in the first place is that a linked list isn't part of the Python standard library.
To implement a linked list, we have to implement two structures: The linked list itself and a node which is used by the linked list.
Let's start with a node. A node has some value (the data it holds) and a pointer to the next node
class Node(object):
def __init__(self, data):
self.data = data
self.next = None
Now the linked list. An empty linked list has nothing but an empty head.
class LinkedList(object):
def __init__(self):
self.head = None
Now we can start using the linked list
ll = LinkedList()
ll.head = Node(1)
ll.head.next = Node(2)
ll.head.next.next = Node(3)
What we have is:
| 1 | -> | 2 | -> | 3 |
Usually, more methods are implemented, like a push_head() method where you insert a node at the beginning of the linked list
def push_head(self, value):
new_node = Node(value)
new_node.next = self.head
self.head = new_node
def print_list(self):
    node = self.head
    while node:
        print(node.data)
        node = node.next
Let's use Floyd's Cycle-Finding algorithm:

def loop_exists(self):
    one_step_p = self.head
    two_steps_p = self.head
    while one_step_p and two_steps_p and two_steps_p.next:
        one_step_p = one_step_p.next
        two_steps_p = two_steps_p.next.next
        if one_step_p == two_steps_p:
            return True
    return False
PEP8 is a list of coding conventions and style guidelines for Python
5 style guidelines:
1. Limit all lines to a maximum of 79 characters.
2. Surround top-level function and class definitions with two blank lines.
3. Use a trailing comma when making a tuple of one element
4. Use spaces (and not tabs) for indentation
5. Use 4 spaces per indentation level
What does `assert` do in Python? Would you use `assert` in non-test/production code?

There are multiple ways to map a URL to a function in Python.
Using the `@app.route` decorator, where `app` is an instance of the `Flask` class and `route` is a method of this class:

@app.route('/')
def home():
return 'main website'
Using the `add_url_rule` method of the `Flask` class. We can also use it to map a URL to a function:

def home():
return 'main website'
app.add_url_rule('/', view_func=home)
Given `x = [1, 2, 3]`, what is the result of list(zip(x))?

[(1,), (2,), (3,)]
list(zip(range(5), range(50), range(50)))
list(zip(range(5), range(50), range(-2)))
[(0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4)]
[]
What would be the result of `a.num2`, assuming the following code:
class B:
    def __get__(self, obj, objtype=None):
        return 10

class A:
    num1 = 2
    num2 = B()
What would happen when running `some_car = Car("Red", 4)`, assuming the following code:
class Print:
    def __get__(self, obj, objtype=None):
        value = obj._color
        print("Color was set to {}".format(value))
        return value

    def __set__(self, obj, value):
        print("The color of the car is {}".format(value))
        obj._color = value

class Car:
    color = Print()

    def __init__(self, color, age):
        self.color = color
        self.age = age
def add(num1, num2):
return num1 + num2
def sub(num1, num2):
return num1 - num2
def mul(num1, num2):
return num1*num2
def div(num1, num2):
return num1 / num2
operators = {
'+': add,
'-': sub,
'*': mul,
'/': div
}
if __name__ == '__main__':
operator = str(input("Operator: "))
num1 = int(input("1st number: "))
num2 = int(input("2nd number: "))
print(operators[operator](num1, num2))
This is a good reference https://docs.python.org/3/library/datatypes.html
def wee(word):
return word
def oh(f):
return f + "Ohh"
>>> oh(wee("Wee"))
<<< Wee Ohh
This allows us to control the before-execution of any given function, and if we add another function as a wrapper (a function receiving another function that receives a function as a parameter), we can also control the after-execution.
Sometimes we want to control the before-after execution of many functions and it would get tedious to write
f = function(function_1())
f = function(function_1(function_2(*args)))
every time, that's what decorators do, they introduce syntax to write all of this on the go, using the keyword '@'.
These two decorators (ntimes and timer) are usually used to display decorators functionalities, you can find them in lots of
tutorials/reviews. I first saw these examples two years ago in pyData 2017. https://www.youtube.com/watch?v=7lmCu8wz8ro&t=3731s
Simple decorator:
def deco(f):
print(f"Hi I am the {f.__name__}() function!")
return f
@deco
def hello_world():
return "Hi, I'm in!"
a = hello_world()
print(a)
>>> Hi I am the hello_world() function!
Hi, I'm in!
This is the simplest decorator version; it basically saves us from writing `hello_world = deco(hello_world)`.
But at this point we can only control the before execution, let's take on the after:
def deco(f):
def wrapper(*args, **kwargs):
print("Rick Sanchez!")
func = f(*args, **kwargs)
print("I'm in!")
return func
return wrapper
@deco
def f(word):
print(word)
a = f("************")
>>> Rick Sanchez!
************
I'm in!
deco receives a function -> f
wrapper receives the arguments -> *args, **kwargs
wrapper returns the result of calling the function with those arguments -> f(*args, **kwargs)
deco returns wrapper
As you can see we conveniently do things before and after the execution of a given function.
For example, we could write a decorator that calculates the execution time of a function.
import time
def deco(f):
def wrapper(*args, **kwargs):
before = time.time()
func = f(*args, **kwargs)
after = time.time()
print(after-before)
return func
return wrapper
@deco
def f():
time.sleep(2)
print("************")
a = f()
>>> 2.0008859634399414
Or create a decorator that executes a function n times.
def n_times(n):
def wrapper(f):
def inner(*args, **kwargs):
for _ in range(n):
func = f(*args, **kwargs)
return func
return inner
return wrapper
@n_times(4)
def f():
print("************")
a = f()
>>>************
************
************
************
class Car:
def __init__(self, model, color):
self.model = model
self.color = color
def __eq__(self, other):
if not isinstance(other, Car):
return NotImplemented
return self.model == other.model and self.color == other.color
>> a = Car('model_1', 'red')
>> b = Car('model_2', 'green')
>> c = Car('model_1', 'red')
>> a == b
False
>> a == c
True
How would you implement the `tail` command in Python? Bonus: implement `head` as well.
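One possible sketch (a naive version that reads the whole file, which is fine for small files but not optimal for huge ones; the path below is just an example):

```python
def tail(file_path, lines=10):
    # Read all lines and keep only the last N.
    with open(file_path) as f:
        return f.readlines()[-lines:]

def head(file_path, lines=10):
    # Read at most N lines from the start without loading the whole file.
    result = []
    with open(file_path) as f:
        for i, line in enumerate(f):
            if i == lines:
                break
            result.append(line)
    return result

print(''.join(tail('/var/log/syslog', 5)))  # example path
```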
Google: "Monitoring is one of the primary means by which service owners keep track of a system’s health and availability".
This approach requires a human to always check why the value was exceeded and how to handle it, while today it is more effective to notify people only when they need to take an actual action. If the issue doesn't require any human intervention, then the problem can be fixed by some processes running in the relevant environment.
Alerts
Tickets
Logging
From Prometheus documentation: "if you need 100% accuracy, such as for per-request billing".
Prometheus server is responsible for scraping and storing the data
Push gateway is used for short-lived jobs
Alert manager is responsible for alerts ;)
Go also has a good community.
What is the difference between `var x int = 2` and `x := 2`?

The result is the same, a variable with the value 2.
With `var x int = 2` we are setting the variable type to integer, while with `x := 2` we are letting Go figure out the type by itself.
False. We can't redeclare variables, but yes, we must use declared variables.
This should be answered based on your usage but some examples are:
func main() {
var x float32 = 13.5
var y int
y = x
}
package main
import "fmt"
func main() {
var x int = 101
var y string
y = string(x)
fmt.Println(y)
}
It looks up what Unicode code point 101 is ('e') and uses it for converting the integer to a string.
If you want to get "101", you should use the package "strconv" and replace `y = string(x)` with `y = strconv.Itoa(x)`.
package main
func main() {
var x = 2
var y = 3
const someConst = x + y
}
Constants in Go can only be declared using constant expressions. But `x`, `y` and their sum are variables, so compilation fails with:

const initializer x + y is not a constant
package main
import "fmt"
const (
x = iota
y = iota
)
const z = iota
func main() {
fmt.Printf("%v\n", x)
fmt.Printf("%v\n", y)
fmt.Printf("%v\n", z)
}
Go's iota identifier is used in const declarations to simplify definitions of incrementing numbers. Because it can be used in expressions, it provides a generality beyond that of simple enumerations.
x
and y
in the first iota group, z
in the second.
Iota page in Go Wiki
It avoids having to declare all the variables for the returns values.
It is called the blank identifier.
answer in SO
package main
import "fmt"
const (
_ = iota + 3
x
)
func main() {
fmt.Printf("%v\n", x)
}
Since the first iota is declared with the value 3 (`+ 3`), the next one has the value 4.
package main
import (
"fmt"
"sync"
"time"
)
func main() {
var wg sync.WaitGroup
wg.Add(1)
go func() {
time.Sleep(time.Second * 2)
fmt.Println("1")
wg.Done()
}()
go func() {
fmt.Println("2")
}()
wg.Wait()
fmt.Println("3")
}
Output: 2 1 3
package main
import (
"fmt"
)
func mod1(a []int) {
for i := range a {
a[i] = 5
}
fmt.Println("1:", a)
}
func mod2(a []int) {
a = append(a, 125) // !
for i := range a {
a[i] = 5
}
fmt.Println("2:", a)
}
func main() {
s1 := []int{1, 2, 3, 4}
mod1(s1)
fmt.Println("1:", s1)
s2 := []int{1, 2, 3, 4}
mod2(s2)
fmt.Println("2:", s2)
}
Output:

1: [5 5 5 5]
1: [5 5 5 5]
2: [5 5 5 5 5]
2: [1 2 3 4]
In `mod1`, `a` shares the same underlying array as `s1`, so when we assign to `a[i]` we also change the values of `s1`.
But in `mod2`, `append` creates a new slice (reallocating the underlying array), so we change only `a`, not `s2`.
package main
import (
"container/heap"
"fmt"
)
// An IntHeap is a min-heap of ints.
type IntHeap []int
func (h IntHeap) Len() int { return len(h) }
func (h IntHeap) Less(i, j int) bool { return h[i] < h[j] }
func (h IntHeap) Swap(i, j int) { h[i], h[j] = h[j], h[i] }
func (h *IntHeap) Push(x interface{}) {
// Push and Pop use pointer receivers because they modify the slice's length,
// not just its contents.
*h = append(*h, x.(int))
}
func (h *IntHeap) Pop() interface{} {
old := *h
n := len(old)
x := old[n-1]
*h = old[0 : n-1]
return x
}
func main() {
h := &IntHeap{4, 8, 3, 6}
heap.Init(h)
heap.Push(h, 7)
fmt.Println((*h)[0])
}
Output: 3
MongoDB advantages are as follows:
The main difference is that SQL databases are structured (data is stored in the form of tables with rows and columns - like an excel spreadsheet table) while NoSQL is unstructured, and the data storage can vary depending on how the NoSQL DB is set up, such as key-value pair, document-oriented, etc.
db.books.find({"name": /abc/})
db.books.find().sort({x:1})
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Functions vs. Comparisons | Query Improvements | Exercise | Solution |
SQL (Structured Query Language) is a standard language for relational databases (like MySQL, MariaDB, ...).
It's used for reading, updating, removing and creating data in a relational database.
The main difference is that SQL databases are structured (data is stored in the form of tables with rows and columns - like an excel spreadsheet table) while NoSQL is unstructured, and the data storage can vary depending on how the NoSQL DB is set up, such as key-value pair, document-oriented, etc.
SQL - Best used when data integrity is crucial. SQL is typically implemented with many businesses and areas within the finance field due to its ACID compliance.
NoSQL - Great if you need to scale things quickly. NoSQL was designed with web applications in mind, so it works great if you need to quickly spread the same information around to multiple servers
Additionally, since NoSQL does not adhere to the strict table with columns and rows structure that Relational Databases require, you can store different data types together.
For these questions, we will be using the Customers and Orders tables shown below:
Customers
Customer_ID | Customer_Name | Items_in_cart | Cash_spent_to_Date |
---|---|---|---|
100204 | John Smith | 0 | 20.00 |
100205 | Jane Smith | 3 | 40.00 |
100206 | Bobby Frank | 1 | 100.20 |
ORDERS
Customer_ID | Order_ID | Item | Price | Date_sold |
---|---|---|---|---|
100206 | A123 | Rubber Ducky | 2.20 | 2019-09-18 |
100206 | A123 | Bubble Bath | 8.00 | 2019-09-18 |
100206 | Q987 | 80-Pack TP | 90.00 | 2019-09-20 |
100205 | Z001 | Cat Food - Tuna Fish | 10.00 | 2019-08-05 |
100205 | Z001 | Cat Food - Chicken | 10.00 | 2019-08-05 |
100205 | Z001 | Cat Food - Beef | 10.00 | 2019-08-05 |
100205 | Z001 | Cat Food - Kitty quesadilla | 10.00 | 2019-08-05 |
100204 | X202 | Coffee | 20.00 | 2019-04-29 |
Select *
From Customers;
Select Items_in_cart
From Customers
Where Customer_Name = "John Smith";
Select SUM(Cash_spent_to_Date) as SUM_CASH
From Customers;
Select count(1) as Number_of_People_w_items
From Customers
where Items_in_cart > 0;
You would join them on the unique key. In this case, the unique key is Customer_ID in both the Customers table and Orders table
Select c.Customer_Name, o.Item
From Customers c
Left Join Orders o
On c.Customer_ID = o.Customer_ID;
with cat_food as (
Select Customer_ID, SUM(Price) as TOTAL_PRICE
From Orders
Where Item like "%Cat Food%"
Group by Customer_ID
)
Select Customer_name, TOTAL_PRICE
From Customers c
Inner JOIN cat_food f
ON c.Customer_ID = f.Customer_ID
where c.Customer_ID in (Select Customer_ID from cat_food);
Although this was a simple statement, the "with" clause really shines when a complex query needs to be run on a table before joining to another. "With" statements are nice because you create a pseudo temp table when running your query, instead of creating a whole new table.
The sum of all the cat food purchases wasn't readily available, so we used a "with" statement to create the pseudo table to retrieve the sum of the prices spent by each customer, then joined the table normally.
SELECT count(*)
FROM shawarma_purchases
WHERE YEAR(purchased_at) == '2017'

vs.

SELECT count(*)
FROM shawarma_purchases
WHERE purchased_at >= '2017-01-01' AND purchased_at <= '2017-12-31'
SELECT count(*)
FROM shawarma_purchases
WHERE
  purchased_at >= '2017-01-01' AND
  purchased_at <= '2017-12-31'
When you use a function (`YEAR(purchased_at)`), the database has to compute the value for every row, scanning the whole table, as opposed to using an index on the column as it is, in its natural state.
Components | Services |
---|---|
Compute | Compute Engine |
App Engine | |
Kubernetes Engine | |
Cloud Function | |
Cloud Run | |
Storage & | Cloud Storage |
Database | Cloud SQL |
Cloud BigTable | |
Cloud Spanner | |
Cloud Datastore | |
Networking | VPC |
Load Balancing | |
Cloud Armor | |
Cloud CDN | |
Cloud DNS | |
Cloud Interconnect | |
Big Data | Big Query |
Cloud Dataproc | |
Cloud Datalab | |
Data Studio | |
DevOps | Container Registry |
Cloud Build | |
Source Repository | |
Identity & | Cloud Identity |
Security | Cloud IAM |
Cloud KMS | |
Cloud AI | Cloud AutoML |
Cloud Vision API | |
Natural Language | |
Cloud Speech-to-Text | |
Cloud Text-to-Speech | |
Cloud Translation API | |
Cloud Video Intelligence | |
API Platform | Maps Platform |
API Analytics | |
Apigee Sense | |
Cloud Endpoints |
A Virtual Private Cloud (VPC) network is a virtual version of a physical network, implemented in Google's internal network. A VPC is a global resource in GCP. Subnetworks (subnets) are regional resources, i.e., subnets can be created within regions.
VPCs can be created in two modes:
Auto mode VPC - one subnet in each region is created automatically by GCP when the VPC is created
Custom mode VPC - no subnets are created automatically. This mode gives users complete control over subnet creation.
Google Cloud Functions is a serverless execution environment for building and connecting cloud services. With Cloud Functions you write simple, single-purpose functions that are attached to events emitted from your cloud infrastructure and services. Your function is triggered when an event being watched is fired.
Cloud Datastore is a schemaless NoSQL datastore in Google's cloud. Applications can use Datastore to query your data with SQL-like queries that support filtering and sorting. Datastore replicates data across multiple datacenters, which provides a high level of read/write availability.
Network tags allow you to apply firewall rules and routes to a specific instance or set of instances: You make a firewall rule applicable to specific instances by using target tags and source tags.
VPC Flow Logs records a sample of network flows sent from and received by VM instances, including instances used as Google Kubernetes Engine nodes. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization.
Enable Flow Logs
Open VPC Network in GCP Console
Click the name of the subnet
Click EDIT button
Set Flow Logs to On
Click Save
$ gsutil ls
$ gcloud alpha storage ls
startup-script
Deployment Manager creates a new deployment.
Fun fact: "anthos" means flower in Greek; flowers grow in the ground (earth) but need rain from the clouds to flourish.
On GCP the kubernetes api-server is the only control plane component exposed to customers whilst compute engine manages instances in the project.
It is a core component of the Anthos stack which provides platform, service and security operators with a single, unified approach to multi-cluster management that spans both on-premises and cloud environments. It closely follows K8s best practices, favoring declarative approaches over imperative operations, and actively monitors cluster state and applies the desired state as defined in Git. It includes three key components as follows:
It follows common modern software development practices which makes cluster configuration, management and policy changes auditable, revertable, and versionable easily enforcing IT governance and unifying resource management in an organisation.
It is part of the Anthos stack that brings a serverless container experience to Anthos, offering a high-level platform experience on top of K8s clusters. It is built with Knative, an open-source operator for K8s that brings serverless application serving and eventing capabilities.
Platform teams in organisations that wish to offer developers additional tools to test, deploy and run applications can use Knative to enhance this experience on Anthos as Cloud Run. Below are some of the benefits:
As it does not support stateful applications or sticky sessions, it is suitable for running stateless applications such as:
You can read about TripleO right here
There are many reasons for that. One example: you can't remove a router if there are active ports assigned to it.
Not by default. The Object Storage API limits the maximum object size to 5GB, but it can be adjusted.
False. Two objects can have the same name if they are in different containers.
Using:
A list of services and their endpoints
The Elastic Stack consists of:
Elasticsearch, Logstash and Kibana are also known as the ELK stack.
From the official docs:
"Elasticsearch is a distributed document store. Instead of storing information as rows of columnar data, Elasticsearch stores complex data structures that have been serialized as JSON documents"
From the blog:
"Logstash is a powerful, flexible pipeline that collects, enriches and transports data. It works as an extract, transform & load (ETL) tool for collecting log messages."
Beats are lightweight data shippers. These data shippers are installed on the client where the data resides.
Examples of beats: Filebeat, Metricbeat, Auditbeat. There are many more.
From the official docs:
"Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps."
The process may vary based on the chosen architecture and the processing you may want to apply to the logs. One possible workflow is:
This is where data is stored and also where different processing takes place (e.g. when you search for data).
Part of a master node's responsibilities:
While there can be multiple master nodes, in reality only one of them is the elected master node.
A node which is responsible for parsing the data. In case you don't use Logstash, this node can receive data from beats and parse it, similarly to how it can be parsed in Logstash.
A coordinating node is responsible for routing requests in and out of the cluster (to the data nodes).
Index in Elastic is in most cases compared to a whole database from the SQL/NoSQL world.
You can choose to have one index to hold all the data of your app, or have multiple indices where each index holds a different type of data from your app (e.g. an index for each service your app is running).
The official docs also offer a great explanation (in general, it's really good documentation, as every project should have):
"An index can be thought of as an optimized collection of documents and each document is a collection of fields, which are the key-value pairs that contain your data"
An index is split into shards and documents are hashed to a particular shard. Each shard may be on a different node in a cluster and each one of the shards is a self contained index.
This allows Elasticsearch to scale to an entire cluster of servers.
From the official docs:
"An inverted index lists every unique word that appears in any document and identifies all of the documents each word occurs in."
Continuing the comparison to SQL/NoSQL, a Document in Elastic is a row in a table in the case of SQL, or a document in a collection in the case of NoSQL. As in NoSQL, a Document is a JSON object which holds data on a unit in your app. What this unit is depends on your app. If your app is related to books, then each document describes a book. If your app is about shirts, then each document is a shirt.
Red means some data is unavailable. Yellow can be caused by running single node cluster instead of multi-node.
False. From the official docs:
"Each indexed field has a dedicated, optimized data structure. For example, text fields are stored in inverted indices, and numeric and geo fields are stored in BKD trees."
In a network/cloud environment where failures can be expected any time, it is very useful and highly recommended to have a failover mechanism in case a shard/node somehow goes offline or disappears for whatever reason. To this end, Elasticsearch allows you to make one or more copies of your index’s shards into what are called replica shards, or replicas for short.
Term Frequency is how often a term appears in a given document and Document Frequency is how often a term appears in all documents. They both are used for determining the relevance of a term by calculating Term Frequency / Document Frequency.
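A toy sketch of the idea (a simplified score, not Elasticsearch's actual ranking function, which is BM25 by default in recent versions):

```python
docs = [
    "the quick brown fox",
    "the lazy dog",
    "the quick dog jumps",
]

def score(term, doc):
    tf = doc.split().count(term)               # how often the term appears in this document
    df = sum(term in d.split() for d in docs)  # in how many documents the term appears
    return tf / df if df else 0                # rarer terms get more weight

print(score("quick", docs[0]))  # 1/2 = 0.5
print(score("the", docs[0]))    # 1/3 ~= 0.33 -> "the" appears everywhere, so it matters less
```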
"The index is actively being written to". More about the phases here
curl -X PUT "localhost:9200/customer/_doc/1?pretty" -H 'Content-Type: application/json' -d'{ "name": "John Doe" }'
It creates the customer index if it doesn't exist and adds a new document with the field name set to "John Doe". Also, if it's the first document it will get the ID 1.
The Bulk API is used when you need to index multiple documents. For a high number of documents it is significantly faster than individual requests, since there are fewer network roundtrips.
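A hedged sketch of the bulk request format: the body is newline-delimited JSON, alternating an action line and a document line, and must end with a newline (index names and IDs below are examples; assumes the third-party `requests` library):

```python
import requests  # third-party library, assumed installed

bulk_body = (
    '{"index": {"_index": "customer", "_id": "1"}}\n'
    '{"name": "John Doe"}\n'
    '{"index": {"_index": "customer", "_id": "2"}}\n'
    '{"name": "Jane Doe"}\n'
)

resp = requests.post(
    "http://localhost:9200/_bulk",
    data=bulk_body,
    headers={"Content-Type": "application/x-ndjson"},
)
print(resp.json())
```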
From the official docs:
"In the query context, a query clause answers the question “How well does this document match this query clause?” Besides deciding whether or not the document matches, the query clause also calculates a relevance score in the _score meta-field."
"In a filter context, a query clause answers the question “Does this document match this query clause?” The answer is a simple Yes or No — no scores are calculated. Filter context is mostly used for filtering structured data"
There are several possible answers for this question. One of them is as follows:
A small-scale Elastic architecture will consist of the Elastic Stack as it is: beats, Logstash, Elasticsearch and Kibana.
A production environment with large amounts of data can include some kind of buffering component (e.g. Redis or RabbitMQ) and also a security component such as Nginx.
A Logstash plugin which modifies information in one format and outputs it in another.
The raw data as it is stored in the index. You can search and filter it.
Total number of documents matching the search results. If no query is used, then it's simply the total number of documents.
"Visualize" is where you can create visual representations for your data (pie charts, graphs, ...)
False. One harvester harvests one file.
You can generate certificates with the provided elastic utils and change configuration to enable security using certificates model.
According to Martin Kleppmann:
"Many processes running on many machines...only message-passing via an unreliable network with variable delays, and the system may suffer from partial failures, unreliable clocks, and process pauses."
Another definition: "Systems that are physically separated, but logically connected"
According to the CAP theorem, it's not possible for a distributed data store to provide more than two of the following at the same time:
Ways to improve:
It's an architecture in which data is stored in and retrieved from a single, non-shared source, usually exclusively connected to one node, as opposed to architectures where a request can go to one of many nodes and the data is retrieved from one shared location (storage, memory, ...).
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Highly Available "Hello World" | Exercise | Solution |
TODO: add more details!
I like this definition from blog.christianposta.com:
"An explicitly and purposefully defined interface designed to be invoked over a network that enables software developers to get programmatic access to data and functionality within an organization in a controlled and comfortable way."
From swagger.io:
"An API specification provides a broad understanding of how an API behaves and how the API links with other APIs. It explains how the API functions and the results to expect when using the API"
False. From swagger.io:
"An API definition is similar to an API specification in that it provides an understanding of how an API is organized and how the API functions. But the API definition is aimed at machine consumption instead of human consumption of APIs."
Automation is the act of automating tasks to reduce human intervention or interaction in regards to IT technology and systems.
While automation focuses on the task level, orchestration is the process of automating processes and/or workflows which consist of multiple tasks, usually across multiple systems.
Data about data. Basically, it describes the type of information that an underlying data will hold.
I can't answer this for you :)
Domain Specific Language (DSLs) are used to create a customised language that represents the domain such that domain experts can easily interpret it.
Data serialization language used by many technologies today like Kubernetes, Ansible, etc.
True. Because YAML is a superset of JSON.
{
    "applications": [
        {
            "name": "my_app",
            "language": "python",
            "version": 20.17
        }
    ]
}
applications:
- app: "my_app"
language: "python"
version: 20.17
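One possible conversion sketch, assuming the JSON is valid (quoted keys) and the third-party PyYAML package is installed:

```python
import json
import yaml  # PyYAML, assumed installed

json_str = '{"applications": [{"name": "my_app", "language": "python", "version": 20.17}]}'

# Parse the JSON and dump it as block-style YAML.
print(yaml.safe_dump(json.loads(json_str), default_flow_style=False))
```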
someMultiLineString: |
look mama
I can write a multi-line string
I love YAML
It's good for use cases like writing a shell script where each line of the script is a different command.
What happens when you change `someMultiLineString: |` to `someMultiLineString: >`?

Using `>` will make the multi-line string fold into a single line:
someMultiLineString: >
This is actually
a single line
do not let appearances fool you
They allow you reference values instead of directly writing them and it is used like this:
username: {{ my.user_name }}
Using this: ---
For Examples:
document_number: 1
---
document_number: 2
Wikipedia: "In computing, firmware is a specific class of computer software that provides the low-level control for a device's specific hardware. Firmware, such as the BIOS of a personal computer, may contain basic functions of a device, and may provide hardware abstraction services to higher-level software such as operating systems."
False. It doesn't maintain state for incoming requests.
It consists of:
HTTP is stateless. To share state, we can use Cookies.
TODO: explain what is actually a Cookie
The server didn't receive a response from another server it communicates with in a timely manner.
Wikipedia: "The X-Forwarded-For (XFF) HTTP header field is a common method for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer."
A load balancer accepts (or denies) incoming network traffic from a client, and based on some criteria (application related, network, etc.) it distributes those communications out to servers (at least one).
L4 and L7
Yes, you can use DNS for performing load balancing.
Recommended read:
Cons:
You would like to make sure the user doesn't lose the current session data.
Cookies. There are application based cookies and duration based cookies.
The maximum timeout value can be set between 1 and 3,600 seconds on both GCP and AWS.
In copyleft, any derivative work must use the same licensing, while in permissive licensing there are no such conditions. GPL-3 is an example of a copyleft license, while BSD is an example of a permissive license.
SSH
HTTP
DHCP
DNS
...
Pros:
Pros:
Local filesystem
Dropbox
Google Drive
A list of questions you as a candidate can ask the interviewer during or after the interview. These are only a suggestion; use them carefully. Not every interviewer will be able (or happy) to answer these, which should perhaps be a red flag warning for you regarding working in such a place, but that's really up to you.
Be careful when asking this question - all companies, regardless of size, have some level of tech debt. Phrase the question in the light that all companies have to deal with this, but you want to see the current pain points they are dealing with.
This is a great way to figure out how managers deal with unplanned work, and how good they are at setting expectations with projects.
This can give you insights in some of the cool projects a company is working on, and if you would enjoy working on projects like these. This is also a good way to see if the managers are allowing employees to learn and grow with projects outside of the normal work you'd do.
Similar to the tech debt question, this helps you identify any pain points with the company.
Additionally, it can be a great way to show how you'd be an asset to the team.
For example, if they mention they have problem X and you've solved that in the past, you can show how you'd be able to mitigate that problem.
Not only will this tell you what is expected from you, it will also provide a big hint on the type of work you are going to do in your first months on the job.
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Message Board Tables | Relational DB Tables | Exercise | Solution |
ACID stands for Atomicity, Consistency, Isolation, Durability. In order to be ACID compliant, the database must meet each of the four criteria
Atomicity - When a change occurs to the database, it should either succeed or fail as a whole.
For example, if you were to update a table, the update should completely execute. If it only partially executes, the update is considered failed as a whole, and will not go through - the DB will revert back to its original state before the update occurred. It should also be mentioned that Atomicity ensures that each transaction is completed as its own standalone "unit" - if any part fails, the whole statement fails.
Consistency - any change made to the database should bring it from one valid state into the next.
For example, if you make a change to the DB, it shouldn't corrupt it. Consistency is upheld by checks and constraints that are pre-defined in the DB. For example, if you tried to change a value from a string to an int when the column should be of datatype string, a consistent DB would not allow this transaction to go through, and the action would not be executed
Isolation - this ensures that a database will never be seen "mid-update" - as multiple transactions are running at the same time, it should still leave the DB in the same state as if the transactions were being run sequentially.
For example, let's say that 20 other people were making changes to the database at the same time. At the time you executed your query, 15 of the 20 changes had gone through, but 5 were still in progress. You should only see the 15 changes that had completed - you wouldn't see the database mid-update as the change goes through.
Durability - Once a change is committed, it will remain committed regardless of what happens (power failure, system crash, etc.). This means that all completed transactions must be recorded in non-volatile memory.
Note that SQL is by nature ACID compliant. Certain NoSQL DB's can be ACID compliant depending on how they operate, but as a general rule of thumb, NoSQL DB's are not considered ACID compliant
Sharding is horizontal partitioning.
Are you able to explain what it is good for?
Not much information is provided as to why it became a bottleneck or what the current architecture is, so one general approach could be to reduce the load on your database by moving frequently-accessed data to an in-memory structure.
Connection Pool is a cache of database connections and the reason it's used is to avoid an overhead of establishing a connection for every query done to a database.
A connection leak is a situation where a database connection isn't closed after being created and is no longer needed.
"A data warehouse is a subject-oriented, integrated, time-variant and non-volatile collection of data in support of organisation's decision-making process"
A database index is a data structure that improves the speed of operations in a table. Indexes can be created using one or more columns, providing the basis for both rapid random lookups and efficient ordering of access to records.
Data that is used multiple times in a database should be stored once and referenced with a foreign key.
This has the clear benefit of ease of maintenance where you need to change a value only in a single place to change it everywhere.
Primary Key: each row in every table should have a unique identifier that represents the row.
Foreign Key: a reference to another table's primary key. This allows you to join tables together to retrieve all the information you need without duplicating data.
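A minimal sketch with Python's built-in sqlite3 module, showing a primary key referenced by a foreign key (table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),  -- foreign key
        item TEXT
    )
""")
conn.execute("INSERT INTO customers VALUES (1, 'John Smith')")
conn.execute("INSERT INTO orders VALUES (1, 1, 'Coffee')")

# Join on the key relationship instead of duplicating the customer's name.
print(conn.execute(
    "SELECT c.name, o.item FROM customers c JOIN orders o ON c.id = o.customer_id"
).fetchall())  # [('John Smith', 'Coffee')]
```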
Wikipedia: "is a programming technique for converting data between incompatible type systems using object-oriented programming languages"
In regards to the relational databases:
Wikipedia: "In the context of SQL, data definition or data description language (DDL) is a syntax for creating and modifying database objects such as tables, indices, and users."
Given a text file, perform the following exercises
Bonus: extract the last word of each line
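For the bonus, one possible sketch (assumes a file named `file.txt` exists):

```python
with open('file.txt') as f:
    for line in f:
        words = line.split()
        if words:             # skip empty lines
            print(words[-1])  # last word of the line
```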
A CDN (Content Delivery Network) is responsible for distributing content geographically. Part of it is what is known as edge locations, aka cache proxies, that allow users to get their content quickly thanks to caching and geographical distribution.
With a single CDN, the whole content originates from one content delivery network.
With multi-CDN, content is distributed across multiple different CDNs, each of which might be on a completely different provider/cloud.
The ability to easily grow in size and capacity based on demand and usage.
The ability to grow but also to shrink based on what is required.
Fault Tolerance - The ability to self-heal and return to normal capacity. Also the ability to withstand a failure and remain functional.
High Availability - Being able to access a resource (in some use cases, using different platforms)
wintellect.com: "High availability, simply put, is eliminating single points of failure and disaster recovery is the process of getting a system back to an operational state when a system is rendered inoperative. In essence, disaster recovery picks up when high availability fails, so HA first."
Vertical Scaling is the process of adding resources to increase power of existing servers. For example, adding more CPUs, adding more RAM, etc.
With vertical scaling alone, the component still remains a single point of failure. In addition, it has hardware limit where if you don't have more resources, you might not be able to scale vertically.
Databases, cache. It's common mostly for non-distributed systems.
Horizontal Scaling is the process of adding more resources that will be able to handle requests as one unit.
A load balancer. You can add more resources, but if you would like them to be part of the process, you have to serve them the requests/responses. Also, data inconsistency is a concern with horizontal scaling.
The load on the producers or consumers may be high which will then cause them to hang or crash.
Instead of working in "push mode", the consumers can pull tasks only when they are ready to handle them. It can be fixed by using a streaming platform like Kafka, Kinesis, etc. This platform will make sure to handle the high load/traffic and pass tasks/messages to consumers only when they are ready to get them.
You can mention:
roll-back & roll-forward cut over dress rehearsals DNS redirection
Additional exercises can be found in system-design-notebook repository.
A central processing unit (CPU) performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program. This contrasts with external components such as main memory and I/O circuitry, and specialized processors such as graphics processing units (GPUs).
RAM (Random Access Memory) is the hardware in a computing device where the operating system (OS), application programs and data in current use are kept so they can be quickly reached by the device's processor. RAM is the main memory in a computer. It is much faster to read from and write to than other kinds of storage, such as a hard disk drive (HDD), solid-state drive (SSD) or optical drive.
An embedded system is a computer system - a combination of a computer processor, computer memory, and input/output peripheral devices—that has a dedicated function within a larger mechanical or electronic system. It is embedded as part of a complete device often including electrical or electronic hardware and mechanical parts.
Raspberry Pi
As defined by Doug Laney:
DataOps seeks to reduce the end-to-end cycle time of data analytics, from the origin of ideas to the literal creation of charts, graphs and models that create value. DataOps combines Agile development, DevOps and statistical process controls and applies them to data analytics.
An answer from talend.com:
"Data architecture is the process of standardizing how organizations collect, store, transform, distribute, and use data. The goal is to deliver relevant data to people who need it, when they need it, and help them make sense of it."
Wikipedia's explanation on Data Warehouse Amazon's explanation on Data Warehouse
Responsible for managing the compute resources in clusters and scheduling users' applications
A programming model for large-scale data processing
In general, Packer automates machine image creation. It allows you to focus on configuration prior to deployment while making the images. This allows you to start the instances much faster in most cases.
A configuration->deployment which has some advantages like:
If you are looking for a way to prepare for a certain exam this is the section for you. Here you'll find a list of certificates, each references to a separate file with focused questions that will help you to prepare to the exam. Good luck :)
Thanks to all of our amazing contributors who make it easy for everyone to learn new things :)
Logos credits can be found here