🔴 More than ~3877 Full Stack, Coding & System Design Interview Questions And Answers sourced from all around the Internet to help you prepare for an interview, conduct one, mock your lead dev or completely ignore. Find more questions and answers on 👉
FullStack.Cafe is the biggest hand-picked collection of the most common coding interview questions for software developers. Prepare for your next programming interview in no time and get a 6-figure job offer fast!
Answer: An array is a collection of homogeneous (same type) data items stored in contiguous memory locations. For example, if an array is of type 'int', it can only store integer elements and cannot hold elements of other types such as double, float, char, etc. The elements of an array are accessed by using an index.
- O(1)
- O(log n)
- O(n)
- O(n log n)
- O(n²)
- O(2ⁿ)
- O(n!)
Source: beginnersbook.com
Answer: Arrays are:
Source: codelack.com
Answer: Pros:

- Fast lookups: retrieving the element at a given index takes O(1) time, regardless of the length of the array.
- Fast appends: adding a new element at the end of the array takes O(1) time.

Cons:

- Fixed size: you need to specify how many elements you're going to store ahead of time.
- Costly inserts and deletes: adding or removing an element in the middle means shifting the other elements over, which takes O(n) time.

Source: www.interviewcake.com
Answer:
An array uses contiguous memory locations (space complexity O(n)) to store its elements, so retrieving any element by index takes O(1) time (constant time). O(1) also describes inserting at the end of the array. However, if you're inserting into the middle of an array, you have to shift all the elements after that element, so the complexity for insertion in that case is O(n). Appending at the end also discounts the case where you'd have to resize the array if it's full.
| Operation | Average Case | Worst Case |
|---|---|---|
| Read | O(1) | O(1) |
| Append | O(1) | O(1) |
| Insert | O(n) | O(n) |
| Delete | O(n) | O(n) |
| Search | O(n) | O(n) |
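The costs in the table can be seen directly with a Python list used as the backing array (an illustrative sketch, not from the source):

```python
arr = [10, 20, 30, 40]

# Read by index: O(1), independent of length.
x = arr[1]            # 20

# Append at the end: amortized O(1).
arr.append(50)        # [10, 20, 30, 40, 50]

# Insert in the middle: O(n), every later element shifts right.
arr.insert(1, 15)     # [10, 15, 20, 30, 40, 50]

# Delete from the middle: O(n), every later element shifts left.
del arr[0]            # [15, 20, 30, 40, 50]

# Search by value: O(n), scans until found.
idx = arr.index(40)   # 3
```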
Source: beginnersbook.com
Answer: Arrays and dictionaries both store collections of data, but differ by how they are accessed. Arrays provide random access of a sequential set of data. Dictionaries (or associative arrays) provide a map from a set of keys to a set of values.
This makes arrays/lists more suitable when you have a group of objects in a set (prime numbers, colors, students, etc.). Dictionaries are better suited for showing relationships between pairs of objects.
Source: stackoverflow.com
Answer: A dynamic array is an array with a big improvement: automatic resizing.
One limitation of arrays is that they're fixed size, meaning you need to specify the number of elements your array will hold ahead of time. A dynamic array expands as you add more elements. So you don't need to determine the size ahead of time.
Source: www.interviewcake.com
Answer: A simple dynamic array can be constructed by allocating an array of fixed-size, typically larger than the number of elements immediately required. The elements of the dynamic array are stored contiguously at the start of the underlying array, and the remaining positions towards the end of the underlying array are reserved, or unused. Elements can be added at the end of a dynamic array in constant time by using the reserved space until this space is completely consumed.
When all space is consumed and an additional element is to be added, the underlying fixed-size array needs to be increased in size. Resizing is typically expensive because you have to allocate a bigger array and copy over all of the elements from the array you have outgrown before you can finally append the new item.
Dynamic array memory allocation is language-specific. For example, in C++ plain arrays are created on the stack and have automatic storage duration -- you don't need to manually manage memory, but they get destroyed when the function they're in ends. They necessarily have a fixed size:
int foo[10];
Arrays created with operator new[]
have dynamic storage duration and are stored on the heap (technically the "free store"). They can have any size, but you need to allocate and free them yourself since they're not part of the stack frame:
int* foo = new int[10];
delete[] foo;
Source: stackoverflow.com
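A minimal dynamic-array sketch along these lines (illustrative; the class and method names are made up for this example). Append is amortized O(1) because the backing array doubles when the reserved space runs out:

```python
class DynamicArray:
    def __init__(self):
        self._capacity = 2
        self._size = 0
        self._data = [None] * self._capacity

    def append(self, value):
        if self._size == self._capacity:      # reserved space consumed:
            self._capacity *= 2               # allocate a bigger array
            new_data = [None] * self._capacity
            for i in range(self._size):       # copy everything over (O(n))
                new_data[i] = self._data[i]
            self._data = new_data
        self._data[self._size] = value        # O(1) in the common case
        self._size += 1

    def __getitem__(self, i):
        if not 0 <= i < self._size:
            raise IndexError(i)
        return self._data[i]

    def __len__(self):
        return self._size
```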
Answer: Backtracking is a systematic way of trying out different sequences of decisions until we find one that "works." Backtracking does not generate all possible solutions first and checks later. It tries to generate a solution and as soon as even one constraint fails, the solution is rejected and the next solution is tried.
Backtracking can be understood as searching a tree for a particular "goal" leaf node. Backtracking in that case is a depth-first search with a bounding function. Every solution produced by backtracking has to satisfy a complex set of constraints.
Source: www.javatpoint.com
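The idea can be sketched with the classic N-queens puzzle (an illustrative example, not from the source): a partial placement is abandoned as soon as one constraint fails, and the next option is tried.

```python
def n_queens(n, cols=()):
    """Count placements of n queens so that none attack each other.

    cols[r] is the column of the queen already placed in row r.
    """
    row = len(cols)
    if row == n:                      # all rows filled: a valid solution
        return 1
    count = 0
    for col in range(n):
        # constraint check: same column or same diagonal as a placed queen
        if any(col == c or abs(col - c) == row - r
               for r, c in enumerate(cols)):
            continue                  # constraint failed: backtrack
        count += n_queens(n, cols + (col,))
    return count
```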
Answer:
P.S. * Opportunistic decision making refers to a process where a person or group assesses alternative actions made possible by the favorable convergence of immediate circumstances recognized without reference to any general plan.
Source: www.quora.com
Answer:
It's like looking for a book: at first you check the drawers in the first room, but the book isn't found, so you backtrack out of the first room to check the drawers in the next room. It's also called Trial & Error.
Source: stackoverflow.com
Answer: Exhaustive Search is an algorithmic technique in which all possible solutions are generated first and then the most feasible solution is selected by applying some rules. Since it follows the most naive approach, it is also known as Brute-Force Search. This approach is one of the most expensive algorithmic techniques, mainly in terms of time complexity.
Source: afteracademy.com
Answer: Big-O notation (also called "asymptotic growth" notation) is a relative representation of the complexity of an algorithm. It shows how an algorithm scales based on input size; we use it to talk about how things scale as the input grows.
Source: stackoverflow.com
Provide an example of an O(1) algorithm. Answer:
Say we have an array of n
elements:
int array[n];
If we wanted to access the first (or any) element of the array this would be O(1)
since it doesn't matter how big the array is, it always takes the same constant time to get the first item:
x = array[0];
Source: stackoverflow.com
Answer: Big-O is often used to make statements about functions that measure the worst case behavior of an algorithm. Worst case analysis gives the maximum number of basic operations that have to be performed during execution of the algorithm. It assumes that the input is in the worst possible state and maximum work has to be done to put things right.
Source: stackoverflow.com
What does O(log n) mean? Answer:
O(log n) means for every element, you're doing something that only needs to look at log N of the elements. This is usually because you know something about the elements that lets you make an efficient choice (for example, to reduce a search space).
The most common attributes of a logarithmic running-time function are that:

- the choice of the next element on which to perform some action is one of several possibilities, and only one will need to be chosen, or
- the elements on which the action is performed are digits of n

The most efficient sorts, such as merge sort, are an example of this. It is O(log n) when we do divide-and-conquer types of algorithms, e.g. binary search. Another example is quicksort, where each time we divide the array into two parts and each time it takes O(N) time to find a pivot element; hence it is N * O(log N) = O(N log N).
Plotting log(n) on a plain piece of paper will result in a graph where the rise of the curve decelerates as n increases.
Source: stackoverflow.com
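A binary search sketch illustrates the idea (assuming a sorted input list): each comparison halves the remaining search space, so n elements need about log₂(n) steps.

```python
def binary_search(sorted_list, target):
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1                         # not found
```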
Answer: The fact is it's difficult to determine the exact runtime of an algorithm; it depends on the speed of the computer's processor. So instead of talking about the runtime directly, we use Big O Notation to talk about how quickly the runtime grows depending on input size.
With Big O Notation, we use the size of the input, which we call n. So we can say things like the runtime grows "on the order of the size of the input" (O(n)) or "on the order of the square of the size of the input" (O(n²)). Our algorithm may have steps that seem expensive when n is small but are eclipsed eventually by other steps as n gets larger. For Big O Notation analysis, we care more about the stuff that grows fastest as the input grows, because everything else is quickly eclipsed as n
gets very large.
Source: medium.com
What does an O(n²) operation do? Answer:
O(n²) means for every element, you're doing something with every other element, such as comparing them. Bubble sort is an example of this.
Source: stackoverflow.com
Details: Let's say we wanted to find a number in the list:
for (int i = 0; i < n; i++){
if(array[i] == numToFind){ return i; }
}
What will be the time complexity (Big O) of that code snippet?
Answer:
This would be O(n) since at most we would have to look through the entire list to find our number. The Big-O is still O(n) even though we might find our number on the first try and run through the loop only once, because Big-O describes the upper bound for an algorithm.
Source: stackoverflow.com
What is the complexity of push and pop for a Stack implemented using a LinkedList? Answer:
O(1)
. Note, you don't have to insert at the end of the list. If you insert at the front of a (singly-linked) list, they are both O(1)
.
Stack contains 1,2,3:
[1]->[2]->[3]
Push 5:
[5]->[1]->[2]->[3]
Pop:
[1]->[2]->[3] // returning 5
Source: stackoverflow.com
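A minimal sketch of such a stack (illustrative): pushing and popping both happen at the head of a singly-linked list, so each is O(1).

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class Stack:
    def __init__(self):
        self.head = None

    def push(self, value):
        self.head = Node(value, self.head)   # new node becomes the head

    def pop(self):
        if self.head is None:
            raise IndexError("pop from empty stack")
        value = self.head.value
        self.head = self.head.next           # unlink the old head
        return value
```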
Explain the difference between O(1) and O(n) space complexities. Answer: Let's consider a traversal algorithm for traversing a list.

- O(1) denotes constant space use: the algorithm allocates the same number of pointers irrespective of the list size. That will happen if we move (reuse) our pointer along the list.
- O(n) denotes linear space use: the algorithm's space use grows with the input size n. That will happen if, for some reason, the algorithm needs to allocate n pointers (or other variables) when traversing the list.

Source: stackoverflow.com
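The difference can be sketched with two traversals of the same list (illustrative): both visit every element, but the first reuses one variable while the second allocates a structure that grows with the input.

```python
def total_constant_space(items):      # O(1) extra space
    total = 0
    for x in items:                   # one reused accumulator variable
        total += x
    return total

def total_linear_space(items):        # O(n) extra space
    seen = []                         # grows to n elements
    for x in items:
        seen.append(x)
    return sum(seen)
```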
Answer: A normal tree has no restrictions on the number of children each node can have. A binary tree is made of nodes, where each node contains a "left" pointer, a "right" pointer, and a data element.
There are three different types of binary trees:
Source: study.com
Answer: A binary search tree is a data structure that quickly allows us to maintain a sorted list of numbers, so that searching for an element can be done in O(log n) time. The properties that separate a binary search tree from a regular binary tree are:

- all nodes of the left subtree are less than the root node,
- all nodes of the right subtree are more than the root node, and
- both subtrees of each node are also binary search trees.
Source: www.programiz.com
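A minimal BST sketch based on these properties (illustrative): keys smaller than a node go left, larger go right, so a lookup discards half the remaining tree at each step when the tree stays balanced.

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root                        # duplicate keys are ignored

def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False
```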
Answer:
Bit stands for Binary Digit and is the smallest unit of data in a computer. A binary digit can only be 0 or 1 because binary is a base-2 number system. This means that if we want to represent the number 5 in binary we would write 0101.
An integer variable usually has a 32-bit limit, which means it consists of 32 bits, giving a range of 2³² values (2 from the two possible states of a bit, 0 or 1).
Answer: A byte is made up of 8 bits and the highest value of a byte is 255, which would mean every bit is set.
Now:

| Place Value | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 | Value |
|---|---|---|---|---|---|---|---|---|---|
| Bit | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | = 22 |

Let's take it left to right and add up all those values together:

128*0 + 64*0 + 32*0 + 16*1 + 8*0 + 4*1 + 2*1 + 1*0 = 22
Source: github.com
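The place-value sum above can be checked directly (an illustrative snippet):

```python
bits = "00010110"

# Sum each bit times its place value, from the most significant bit down.
value = sum(int(b) * 2 ** (len(bits) - 1 - i) for i, b in enumerate(bits))

# Python's int() performs the same base-2 conversion.
assert value == int(bits, 2) == 22
```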
Answer: A byte is made up of 8 bits and the highest value of a byte is 255, which would mean every bit is set. We will look at why a byte's maximum value is 255 in just a second.
So if all bits are set and the value = 255 my byte would look like this:
| Place Value | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 | Value |
|---|---|---|---|---|---|---|---|---|---|
| Bit | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | = 255 |
Source: github.com
Answer:
AND|0 1 OR|0 1
---+---- ---+----
0|0 0 0|0 1
1|0 1 1|1 1
XOR|0 1 NOT|0 1
---+---- ---+---
0|0 1 |1 0
1|1 0
Left Shift ( << ): a binary operator that shifts the bits in the given bit pattern some number of positions to the left, appending 0s at the end.
Signed Right Shift ( >> ): a binary operator that shifts the bits in the given bit pattern some number of positions to the right, preserving the sign (the first bit).
Zero Fill Right Shift ( >>> ): Shifts right by pushing zeros in from the left, filling in the left bits with 0s
Source: www.hackerearth.com
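The operators above can be checked on small numbers (illustrative; note that Python has no `>>>` operator, since its integers are unbounded):

```python
a, b = 0b1100, 0b1010

assert a & b == 0b1000         # AND: 1 only where both bits are 1
assert a | b == 0b1110         # OR: 1 where either bit is 1
assert a ^ b == 0b0110         # XOR: 1 where the bits differ
assert ~a == -0b1101           # NOT: flips every bit (two's complement)

assert 0b0101 << 1 == 0b1010   # left shift appends a 0: multiply by 2
assert 0b0101 >> 1 == 0b0010   # right shift drops the low bit: divide by 2
```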
Answer: Bitwise operators are used for manipulating data at the bit level, also called bit-level programming. They are fast and simple actions, directly supported by the processor, and are used to manipulate values for comparisons and calculations.
On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition.
Source: en.wikipedia.org
Details: I have to flip all bits in a binary representation of an integer. Given:
10101
The output should be
01010
What is the bitwise operator to accomplish this when used with an integer?
Answer:
Simply use the bitwise NOT operator ~. Note that ~ flips all bits of the integer, including the leading zeros and the sign bit, so to flip only the lowest k bits you can XOR the value with a mask of k ones instead.
int flipBits(int n) {
return ~n;
}
Source: stackoverflow.com
Answer: Blockchain is a secure distributed ledger (data structure or database) that maintains a continuously growing list of ordered records, called "blocks", that are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data.
By design, a blockchain is resistant to modification of the data. It is "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way".
Once recorded, the data in any given block cannot be altered retroactively without alteration of all subsequent blocks, which requires consensus of the network majority.
Source: en.wikipedia.org
Answer: Transactions are the things that give a blockchain purpose. They are the smallest building blocks of a blockchain system. Transactions generally consist of:
This is not too different from a standard transaction that you would find on a credit card statement.
A transaction changes the state of the agreed-correct blockchain. A blockchain is a shared, decentralized, distributed state machine. This means that all nodes (users of the blockchain system) independently hold their own copy of the blockchain, and the current known "state" is calculated by processing each transaction in order as it appears in the blockchain.
Source: pluralsight.com
Answer: Blockchains are composed of three core parts:
Source: dummies.com
Answer: Basically, the blockchain data structure is an ordered, back-linked record of blocks of transactions. It can be saved as a file or in a plain database. Each block can be identified by a hash, created by applying the SHA256 cryptographic hash algorithm to the header of the block. Each block references a former block, also known as the parent block, in the "previous block hash" field of the block header.
Source: cryptoticker.io
Answer: A blockchain exists out of blocks of data. These blocks of data are stored on nodes (compare it to small servers). Nodes can be any kind of device (mostly computers, laptops or even bigger servers). Nodes form the infrastructure of a blockchain.
All nodes on a blockchain are connected to each other and they constantly exchange the latest blockchain data with each other so all nodes stay up to date. They store, spread and preserve the blockchain data, so theoretically a blockchain exists on nodes.
A full node is basically a device (like a computer) that contains a full copy of the transaction history of the blockchain.
Source: lisk.io
Answer: The **first block in any blockchain** is termed the genesis block. If you start at any block and follow the chain backwards chronologically, you will arrive at the genesis block. The genesis block is statically encoded within the client software, so that it cannot be changed. Every node can identify the genesis block's hash and structure, the fixed time of creation, and its single transactions. Thus every node has a secure "root" from which it is possible to build a trusted blockchain.
Source: linkedin.com
Answer: A proof of work is a piece of data which is difficult (costly, time-consuming) to produce but easy for others to verify and which satisfies certain requirements. Producing a proof of work can be a random process with low probability so that a lot of trial and error is required on average before a valid proof of work is generated. Difficulty is a measure of how difficult it is to find a hash below a given target.
Source: en.bitcoin.it
Answer: Tokens/Coins are used as a medium of exchange between states. They are digital assets built to perform a specific function within a blockchain.
When someone does a transaction, there is a change of state, and coins are moved from one address to another address. Apart from that, transactions contain some additional data; this data can be mutated through the change of state. For this reason, blockchains need coins or tokens to incentivize the participants to join their networks.
Source: mindmajix.com
Answer: If A + B = C, then no matter what the circumstances, A+B will always be equal to C. That is called deterministic behavior.
Hash functions are deterministic, meaning A's hash will always be H(A).
Source: blockgeeks.com
Answer: You can swap two numbers without using a temporary or third variable by storing the sum of the numbers in one of them and then subtracting, something like:

a = 3;
b = 5;
a = a + b; // 8
b = a - b; // 3
a = a - b; // 5

now you have a = 5 and b = 3, so the numbers are swapped without using a third or temp variable.
Source: javarevisited.blogspot.com
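A well-known variant (not shown in the source) swaps with XOR instead, which in fixed-width languages also avoids the overflow risk of a + b:

```python
# XOR swap: x ^ y ^ y == x, so three XORs exchange two values
# without a temporary variable.
a, b = 3, 5
a = a ^ b   # a now holds a^b
b = a ^ b   # (a^b)^b == original a
a = a ^ b   # (a^b)^a == original b
assert (a, b) == (5, 3)
```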
Answer: Dynamic programming is all about ordering your computations in a way that avoids recalculating duplicate work. More specifically, Dynamic Programming is a technique used to avoid computing multiple times the same subproblem in a recursive algorithm. DP algorithms could be implemented with recursion, but they don't have to be.
With dynamic programming, you store your results in some sort of table generally. When you need the answer to a problem, you reference the table and see if you already know what it is. If not, you use the data in your table to give yourself a stepping stone towards the answer.
There are two approaches to apply Dynamic Programming: Top-Down (memoization, starting from the top of the recursion tree) and Bottom-Up (tabulation, starting from the smallest subproblems):
TOP of the tree
fib(4)
fib(3)...................... + fib(2)
fib(2)......... + fib(1) fib(1)........... + fib(0)
fib(1) + fib(0) fib(1) fib(1) fib(0)
fib(1) fib(0)
BOTTOM of the tree
Source: stackoverflow.com
Answer:
Source: stackoverflow.com
Answer: The key idea of DP is to save answers of overlapping smaller sub-problems to avoid recomputation. For that:
Source: stackoverflow.com
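The idea can be sketched with the classic overlapping-subproblem example, Fibonacci (an illustrative top-down memoization, not from the source):

```python
def fib(n, table=None):
    """Top-down DP: each fib(k) is computed once and stored in a table."""
    if table is None:
        table = {}
    if n < 2:
        return n
    if n not in table:               # not computed yet: solve and save
        table[n] = fib(n - 1, table) + fib(n - 2, table)
    return table[n]                  # otherwise reuse the saved answer
```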
Answer: Pros:
Cons:

- With deep recursion (say, computing fib(10⁶) recursively), you will run out of stack space, because each delayed computation must be put on the stack, and you will have 10⁶ of them.

Source: stackoverflow.com
Answer: The Fibonacci Sequence is the series of numbers:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
The Fibonacci sequence is characterized by the fact that every number after the first two is the sum of the two preceding ones:
Fibonacci(0) = 0,
Fibonacci(1) = 1,
Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2)
The Fibonacci sequence appears a lot in nature. Patterns such as the spirals of shells, the curve of waves, seed heads, pinecones, and the branches of trees can all be described using this mathematical sequence. The fact that things as large as spirals of galaxies, and as small as DNA molecules, follow the Golden Ratio rule suggests that the Fibonacci sequence is one of the most fundamental characteristics of the Universe.
Source: www.mathsisfun.com
Answer:
When we take any two successive (one after the other) Fibonacci Numbers, their ratio is very close to the Golden Ratio φ, which is approximately 1.618034.... In fact, the bigger the pair of Fibonacci Numbers, the closer the approximation. Let us try a few:
3/2 = 1.5
5/3 = 1.666666666...
...
233/377 = 1.618055556...
This also works when we pick two random whole numbers to begin the sequence, such as 192 and 16 (we get the sequence 192, 16, 208, 224, 432, 656, 1088, 1744, 2832, 4576, 7408, 11984, 19392, 31376, ...):
16/192 = 0.08333333...
208/16 = 13
...
11984/7408 = 1.61771058...
19392/11984 = 1.61815754...
Source: www.mathsisfun.com
Generate the Fibonacci sequence in O(n) time. Answer: The easiest solution that comes to mind here is iteration:
function fib(n){
  let arr = [0, 1];
  for (let i = 2; i < n + 1; i++){
    arr.push(arr[i - 2] + arr[i - 1]);
  }
  return arr[n];
}
And output:
fib(4)
=> 3
Notice that the first two numbers cannot really be generated by the for loop, because the loop adds two numbers together, so instead of creating an empty array we assign our arr variable to [0, 1], which we know for a fact will always be there. After that we create a loop that starts iterating from i = 2 and adds numbers to the array until the length of the array is equal to n + 1. Finally, we return the number at index n of the array.
Source: medium.com
Answer: Recursive solution looks pretty simple (see code).
Let's look at the diagram that will help you understand what's going on here with the rest of our code. Function fib is called with argument 5.
Basically our fib function will continue to recursively call itself, creating more and more branches of the tree until it hits the base case, from which it will start summing up each branch's return values bottom up, until it finally sums them all up and returns an integer equal to 5.
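The recursive solution the answer refers to can be sketched as (illustrative; the base cases mirror the definition of the sequence above):

```python
def fib(n):
    """Plain recursion: each call branches into two smaller calls
    until it hits the base cases fib(0) = 0 and fib(1) = 1."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```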
Source: medium.com
Answer:
A graph is a common data structure that consists of a finite set of nodes (or vertices) and a set of edges connecting them. A pair (x,y)
is referred to as an edge, which communicates that the x vertex connects to the y vertex.
Graphs are used to solve real-life problems that involve representation of the problem space as a network. Examples of networks include telephone networks, circuit networks, social networks (like LinkedIn, Facebook etc.).
Source: www.educative.io
Answer: Graph:
Tree:
Source: stackoverflow.com
Answer: There are three common ways to represent a graph with V vertices and E edges:

**Edge list**: an array of E edges, each stored as the pair of vertices it connects:

[ [0,1], [0,6], [0,8], [1,4], [1,6], [1,9], [2,4], [2,6], [3,4], [3,5], [3,8], [4,5], [4,9], [7,8], [7,9] ]

**Adjacency matrix**: a V×V matrix of 0s and 1s, where we can tell if an edge (i, j) is in the graph by looking at the graph[i][j] value. For a sparse graph, the adjacency matrix is mostly 0s, and we use lots of space to represent only a few edges. For an undirected graph, the adjacency matrix is symmetric.

[ [0, 1, 0, 0, 0, 0, 1, 0, 1, 0],
[1, 0, 0, 0, 1, 0, 1, 0, 0, 1],
[0, 0, 0, 0, 1, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 0, 0, 1, 0],
[0, 1, 1, 1, 0, 1, 0, 0, 0, 1],
[0, 0, 0, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 1],
[1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
[0, 1, 0, 0, 1, 0, 0, 1, 0, 0] ]
**Adjacency list**: for each vertex i, store an array of the vertices adjacent to it (or an array of tuples for a weighted graph). To find out whether an edge (i, j) is present in the graph, we go to i's adjacency list in constant time and then look for j in i's adjacency list.

[ [1, 6, 8], // 0
[0, 4, 6, 9], // 1
[4, 6], // 2
[4, 5, 8],
[1, 2, 3, 5, 9],
[3, 4],
[0, 1, 2],
[8, 9],
[0, 3, 7],
[1, 4, 7] ] // N
Source: www.khanacademy.org
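The representations can be tied together in code (an illustrative sketch building the adjacency list above from the edge list, for an undirected graph):

```python
edges = [(0, 1), (0, 6), (0, 8), (1, 4), (1, 6), (1, 9), (2, 4),
         (2, 6), (3, 4), (3, 5), (3, 8), (4, 5), (4, 9), (7, 8), (7, 9)]

n = 10
adjacency = [[] for _ in range(n)]
for i, j in edges:
    adjacency[i].append(j)   # undirected: record the edge both ways
    adjacency[j].append(i)

def has_edge(i, j):
    """Edge check as described: scan i's adjacency list for j."""
    return j in adjacency[i]
```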
Answer: We call algorithms greedy when they utilise the greedy property. The greedy property is:
At that exact moment in time, what is the optimal choice to make?
Greedy algorithms are greedy. They do not look into the future to decide the global optimal solution. They are only concerned with the optimal solution locally. This means that the overall optimal solution may differ from the solution the greedy algorithm chooses.
They never look backwards at what they've done to see if they could optimise globally. This is the main difference between Greedy Algorithms and Dynamic Programming.
Source: skerritt.blog
Answer: Greedy algorithms are quick, a lot faster than the two other alternatives (Divide & Conquer and Dynamic Programming), which is the main reason they're used. Sometimes, greedy algorithms give the globally optimal solution every time. Some of these algorithms are:
These algorithms are Greedy, and their Greedy solution gives the optimal solution.
Answer: A hash table (hash map) is a data structure that implements an associative array abstract data type, a structure that can map keys to values. Hash tables implement an associative array, which is indexed by arbitrary objects (keys). A hash table uses a hash function to compute an index, also called a hash value, into an array of buckets or slots, from which the desired value can be found.
Source: en.wikipedia.org
Answer: Hashing is the practice of using an algorithm (or hash function) to map data of any size to a fixed length. This is called a hash value (or sometimes a hash code, hash sum, or even a hash digest if you're feeling fancy). In hashing, keys are converted into hash values or indexes by using hash functions. Hashing is a one-way function.
Source: www.thesslstore.com
Answer: A Hash Value (also called a Hash or Checksum) is a string value (of a specific length) which is the result of a Hashing Algorithm's calculation. Hash Values have different uses:
Source: www.omnisecu.com
Answer:
The space complexity of a data structure indicates how much space it occupies in relation to the amount of elements it holds. For example, a space complexity of O(1) would mean that the data structure always consumes constant space no matter how many elements you put in it. O(n) would mean that the space consumption grows linearly with the amount of elements in it.
A hashtable typically has a space complexity of O(n)
.
Source: stackoverflow.com
Answer: A hash function is any function that can be used to map data of arbitrary size to fixed-size values. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. The values are used to index a fixed-size table called a hash table. Use of a hash function to index a hash table is called hashing or scatter storage addressing.
Mathematically speaking, a hash function is usually defined as a mapping from the universe of elements you want to store in the hash table to the range {0, 1, 2, .., numBuckets - 1}
.
Some properties of Hash Functions are:
Source: en.wikipedia.org
Answer: To detect if a list is cyclic, we can check whether a node had been visited before. A natural way is to use a hash table.
Algorithm
We go through each node one by one and record each node's reference (or memory address) in a hash table. If the current node is null, we have reached the end of the list and it must not be cyclic. If the current node's reference is already in the hash table, then we return true.
Source: leetcode.com
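The described algorithm can be sketched as follows (the Node class here is an illustrative stand-in, not from the source):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    seen = set()                 # hash table of visited node references
    node = head
    while node is not None:      # reached null: the list is not cyclic
        if id(node) in seen:     # reference seen before: cycle found
            return True
        seen.add(id(node))
        node = node.next
    return False
```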
Answer: A Heap is a special tree-based data structure which is an almost complete tree that satisfies the heap property: in a max heap, for any given node C with parent P, the key of P is greater than or equal to the key of C; in a min heap, the key of P is less than or equal to the key of C.
A common implementation of a heap is the binary heap, in which the tree is a binary tree.
Source: www.geeksforgeeks.org
Answer: A priority queue is a data structure that stores priorities (comparable values) and perhaps associated information. A priority queue is different from a "normal" queue, because instead of being a "first-in-first-out" data structure, values come out in order by priority. Think of a priority queue as a kind of bag that holds priorities. You can put one in, and you can take out the current highest priority.
Source: pages.cs.wisc.edu
Answer: Binary heaps are commonly implemented with an array. Any binary tree can be stored in an array, but because a binary heap is always a complete binary tree, it can be stored compactly. No space is required for pointers; instead, the parent and children of each node can be found by arithmetic on array indices:
- The root is stored at index `0`.
- For the node at index `i`, the left child is at index `(2*i)+1`.
- The right child is at index `(2*i)+2`.
- The parent is at index `(i-1)/2` (integer division).
Source: www.geeksforgeeks.org
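The index arithmetic above can be written as tiny helper functions (the names are ours):

```java
public class HeapIndex {
    static int leftChild(int i)  { return 2 * i + 1; }
    static int rightChild(int i) { return 2 * i + 2; }
    static int parent(int i)     { return (i - 1) / 2; } // integer division floors

    public static void main(String[] args) {
        // For the node at index 1: children at 3 and 4, parent at 0.
        System.out.println(leftChild(1) + " " + rightChild(1) + " " + parent(1)); // 3 4 0
    }
}
```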
Details: Suppose I have a Heap Like the following:
77
/ \
/ \
50 60
/ \ / \
22 30 44 55
Now, I want to insert another item 55 into this Heap. How to do this?
Answer: A binary heap is defined as a binary tree with two additional constraints: the shape property (it is a complete binary tree: all levels are fully filled except possibly the last, which is filled from the left) and the heap property (each node's key is ordered with respect to its children's keys, consistently across the whole heap).
We add the new node at the leftmost open position in the bottom level; then, while the parent is lower than the newly added child, we swap them. This "sift-up" goes on until the child reaches a parent whose value is greater than its own.
Your initial tree is:
77
/ \
/ \
50 60
/ \ / \
22 30 44 55
Now adding 55 according to the rule on most left side:
77
/ \
/ \
50 60
/ \ / \
22 30 44 55
/
55
But 22 is lower than 55, so we swap them:
77
/ \
/ \
50 60
/ \ / \
55 30 44 55
/
22
Now 55 has become the child of 50, which is still lower than 55, so we swap them too:
77
/ \
/ \
55 60
/ \ / \
50 30 44 55
/
22
Now the sift-up stops, because the parent 77 is greater than 55.
Source: stackoverflow.com
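The walkthrough above can be sketched as an array-backed max-heap with a sift-up insertion (a minimal illustration; class and method names are ours):

```java
import java.util.ArrayList;
import java.util.List;

public class MaxHeap {
    private final List<Integer> a = new ArrayList<>();

    // Append at the next open slot, then "sift up" while the new
    // key beats its parent (max-heap property).
    public void insert(int key) {
        a.add(key);                  // leftmost open position in the bottom level
        int i = a.size() - 1;
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (a.get(parent) >= a.get(i)) break; // heap property restored
            int tmp = a.get(parent);              // swap child with parent
            a.set(parent, a.get(i));
            a.set(i, tmp);
            i = parent;
        }
    }

    public int peek() { return a.get(0); }

    public static void main(String[] args) {
        MaxHeap h = new MaxHeap();
        for (int k : new int[]{77, 50, 60, 22, 30, 44, 55}) h.insert(k); // the example heap
        h.insert(55);                 // the insertion from the walkthrough
        System.out.println(h.peek()); // 77 stays on top
    }
}
```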
Answer: A Binary Heap is a Binary Tree with the following properties: it is a complete tree (all levels are completely filled except possibly the last, which has all keys as far left as possible), and it is either a Min Heap (the key at the root is the minimum among all keys, and the same property holds recursively for every subtree) or, symmetrically, a Max Heap. The two trees below are examples of Min Heaps:
10 10
/ \ / \
20 100 15 30
/ / \ / \
30 40 50 100 40
Source: www.geeksforgeeks.org
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Answer: A linked list is a linear data structure where each element is a separate object. Each element (we will call it a node) comprises two items - the data and a reference (pointer) to the next node. The last node has a reference to null. The entry point into a linked list is called the head of the list. Note that the head is not a separate node, but a reference to the first node. If the list is empty, the head is a null reference.
Source: www.cs.cmu.edu
Answer: There are several types:
- Singly Linked List - each node points only to the next node.
- Doubly Linked List - each node points to both the next and the previous node.
- Circular Linked List - the last node points back to the first node (it can be singly or doubly linked).
Source: www.thecrazyprogrammer.com
Answer: A cycle/loop occurs when a node's next points back to a previous node in the list. The linked list is no longer linear with a beginning and end; instead, it cycles through a loop of nodes.
Source: www.interviewcake.com
Answer:
Source: www.cs.cmu.edu
Answer:
- Access: `O(n)`.
- Search: `O(n)`.
- Insertion: `O(1)`.
- Deletion: `O(1)`.
Source: github.com
Answer: A few disadvantages of linked lists are:
- They use more memory than arrays because of the storage occupied by their pointers.
- Nodes must be read in order from the beginning, as linked lists have no random access.
- They are not cache friendly: nodes are scattered in memory, unlike the contiguous elements of an array.
- Reverse traversal is difficult in a singly linked list (a doubly linked list makes it easier but spends extra memory on the back pointers).
Source: www.quora.com
Answer:
Nothing faster than O(n)
can be done. You need to traverse the list and alter pointers on every node, so time will be proportional to the number of elements.
Source: stackoverflow.com
What is the complexity of `push` and `pop` for a Stack implemented using a LinkedList? ★★ Answer:
Both are `O(1)`. Note, you don't have to insert at the end of the list: if you insert at the front of a (singly-linked) list, they are both `O(1)`.
Stack contains 1,2,3:
[1]->[2]->[3]
Push 5:
[5]->[1]->[2]->[3]
Pop:
[1]->[2]->[3] // returning 5
Source: stackoverflow.com
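The diagrams above can be sketched as a linked-list-backed stack in Java; both operations touch only the head reference, hence `O(1)` (a minimal illustration, names are ours):

```java
public class LinkedStack {
    private static class Node {
        int val; Node next;
        Node(int v, Node n) { val = v; next = n; }
    }
    private Node head; // top of the stack = front of the list

    public void push(int v) { head = new Node(v, head); } // new node becomes head

    public int pop() {
        int v = head.val;
        head = head.next; // unlink the old head
        return v;
    }

    public static void main(String[] args) {
        LinkedStack s = new LinkedStack();
        s.push(3); s.push(2); s.push(1); // list is [1]->[2]->[3]
        s.push(5);                       // [5]->[1]->[2]->[3]
        System.out.println(s.pop());     // 5
    }
}
```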
Answer:
The add()
method below walks down the list until it finds the appropriate position. Then, it splices in the new node and updates the start
, prev
, and curr
pointers where applicable.
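The original snippet is not included here, so the following is a hedged reconstruction of such an `add()` method for a sorted singly linked list (the field names `start`, `prev`, and `curr` follow the description; the `Node` class is our own):

```java
public class SortedList {
    static class Node { int data; Node next; Node(int d) { data = d; } }
    Node start; // head of the sorted list

    // Walks down the list to the first node >= data, then splices
    // in a new node, updating start/prev/curr where applicable.
    public void add(int data) {
        Node curr = start, prev = null;
        while (curr != null && curr.data < data) {
            prev = curr;
            curr = curr.next;
        }
        Node node = new Node(data);
        node.next = curr;
        if (prev == null) start = node; // new smallest element becomes the head
        else prev.next = node;
    }

    public static void main(String[] args) {
        SortedList list = new SortedList();
        list.add(3); list.add(1); list.add(2);
        for (Node n = list.start; n != null; n = n.next)
            System.out.print(n.data + " "); // 1 2 3
    }
}
```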
Note that the reverse operation, namely removing elements, doesn't need to change, because you are simply throwing things away which would not change any order in the list.
Source: stackoverflow.com
Answer: To convert a singly linked list to a circular linked list, we set the next pointer of the tail node to the head pointer:
- Traverse to the last node of the list using a temporary pointer `temp`.
- Set `temp->next = head`.
Source: www.techcrashcourse.com
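A minimal Java sketch of this conversion (the `Node` class and method name are ours):

```java
public class MakeCircular {
    static class Node { int val; Node next; Node(int v) { val = v; } }

    // Walk to the tail with a temp pointer, then point it back at head.
    public static void makeCircular(Node head) {
        Node temp = head;
        while (temp.next != null) temp = temp.next; // find the tail
        temp.next = head;                           // close the loop
    }

    public static void main(String[] args) {
        Node head = new Node(1);
        head.next = new Node(2);
        makeCircular(head);
        System.out.println(head.next.next == head); // true: wrapped around
    }
}
```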
Answer: To detect if a list is cyclic, we can check whether a node has been visited before. A natural way is to use a hash table.
Algorithm
We go through each node one by one and record each node's reference (or memory address) in a hash table. If the current node is `null`, we have reached the end of the list and it must not be cyclic. If the current node's reference is already in the hash table, then return true.
Source: leetcode.com
Answer: Linked lists are very useful when you need:
- constant-time insertions and deletions (for example in real-time computing, where time predictability matters),
- to handle data whose size is unknown in advance,
- to insert items in the middle of the list (as with a priority queue).
Using an array-based list for these purposes has severe limitations:
- resizing the array means allocating new memory and copying every element, an `O(n)` operation,
- inserting in the middle requires shifting all subsequent elements.
Source: stackoverflow.com
Answer: A doubly linked list is simply a linked list where every element has both next and prev members, pointing at the elements before and after it, not just the one after it as in a singly linked list.
so to convert your list to a doubly linked list, just change your node to be:
private class Node
{
Picture data;
Node pNext;
Node pPrev;
};
and when iterating the list, on each new node add a reference to the previous node.
Source: stackoverflow.com
Answer: You can simulate a linked list by using two stacks. One stack is the "list," and the other is used for temporary storage.
This isn't terribly efficient, by the way, but it would in fact work.
Source: stackoverflow.com
O(1)
? βββRead answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
k
consecutive linked list "parts" βββRead answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Nth
element from the end of a singly Linked List? βββRead answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
O(n1/2)
? ββββRead answer on π FullStack.Cafe
O(log n)
on a sorted Linked List? ββββRead answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
O(n)
time? ββββRead answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
O(1)
Space βββββRead answer on π FullStack.Cafe
Answer: Queue, as the name suggests, is used whenever we need to manage a group of objects so that the first one coming in also gets out first while the others wait for their turn, like in the following scenarios:
- Serving requests on a single shared resource, like a printer or CPU task scheduling.
- Handling of interrupts in real-time systems, in first-come-first-served order.
- Buffers in applications like media players, and asynchronous data transfer (IO buffers, pipes, file IO).
Source: www.studytonight.com
Answer: A queue is a container of objects (a linear collection) that are inserted and removed according to the first-in first-out (FIFO) principle. The process of adding an element to the queue is called Enqueue, and the process of removing an element from the queue is called Dequeue.
Source: www.cs.cmu.edu
Answer: Because they help manage your data in a more particular way than arrays and lists. It means that when you're debugging a problem, you won't have to wonder if someone randomly inserted an element into the middle of your list, messing up some invariants.
Arrays and lists are random access. They are very flexible and also easily corruptible. If you want to manage your data as FIFO or LIFO it's best to use those, already implemented, collections.
More practically, you should use a queue when you want your data handled first-in-first-out and a stack when you want it last-in-first-out.
Source: stackoverflow.com
Answer:
- Access: `O(n)`.
- Search: `O(n)`.
- Insertion: `O(1)`.
- Deletion: `O(1)`.
Source: github.com
Answer: Queues can be classified into the following types:
- Simple Queue - insertion happens at the rear, deletion at the front.
- Circular Queue - the last position is connected back to the first position.
- Priority Queue - elements are dequeued by priority rather than by arrival order.
- Dequeue (Double-Ended Queue) - insertion and deletion are possible at both ends.
Source: www.ques10.com
Implement a Stack using two Queues (with efficient `push`) ★★ Details:
Given two queues with their standard operations (enqueue
, dequeue
, isempty
, size
), implement a stack with its standard operations (pop
, push
, isempty
, size
). The stack should be efficient when pushing an item.
Answer:
Given we have `queue1` and `queue2`:
- push - `O(1)`: enqueue the new element onto `queue1`.
- pop - `O(n)`: while the size of `queue1` is bigger than 1, pipe (dequeue + enqueue) the dequeued items from `queue1` into `queue2`; dequeue and return the last item of `queue1`, then switch the names of `queue1` and `queue2`.
Source: stackoverflow.com
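The two-queue scheme above, sketched in Java on `ArrayDeque` (used here purely as a FIFO `Queue`; the class name is ours):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class StackViaQueues {
    private Queue<Integer> queue1 = new ArrayDeque<>();
    private Queue<Integer> queue2 = new ArrayDeque<>();

    public void push(int x) { queue1.add(x); } // O(1): just enqueue

    public int pop() {                         // O(n)
        while (queue1.size() > 1)              // pipe all but the last item across
            queue2.add(queue1.remove());
        int top = queue1.remove();             // the last item is the stack top
        Queue<Integer> t = queue1;             // switch the names of the queues
        queue1 = queue2;
        queue2 = t;
        return top;
    }

    public static void main(String[] args) {
        StackViaQueues s = new StackViaQueues();
        s.push(1); s.push(2); s.push(3);
        System.out.println(s.pop()); // 3
        System.out.println(s.pop()); // 2
    }
}
```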
Details: Suppose we have two stacks and no other temporary variable. Is it possible to "construct" a queue data structure using only the two stacks?
Answer:
Keep two stacks, let's call them `inbox` and `outbox`.
- Enqueue: push the new element onto `inbox`.
- Dequeue: if `outbox` is empty, refill it by popping each element from `inbox` and pushing it onto `outbox`; then pop and return the top element from `outbox`.
Source: stackoverflow.com
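The inbox/outbox scheme above in Java; each element is moved at most twice, so enqueue/dequeue are amortized `O(1)` (a sketch, class name is ours):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class QueueViaStacks {
    private final Deque<Integer> inbox = new ArrayDeque<>();
    private final Deque<Integer> outbox = new ArrayDeque<>();

    public void enqueue(int x) { inbox.push(x); }

    public int dequeue() {
        if (outbox.isEmpty())                         // refill only when empty
            while (!inbox.isEmpty()) outbox.push(inbox.pop());
        return outbox.pop();                          // reversal restores FIFO order
    }

    public static void main(String[] args) {
        QueueViaStacks q = new QueueViaStacks();
        q.enqueue(1); q.enqueue(2); q.enqueue(3);
        System.out.println(q.dequeue()); // 1, FIFO order preserved
    }
}
```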
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Answer: There are some:
Source: stackoverflow.com
Answer:
Source: stackoverflow.com
Answer:
P.S. * Opportunistic decision making refers to a process where a person or group assesses alternative actions made possible by the favorable convergence of immediate circumstances recognized without reference to any general plan.
Source: www.quora.com
Details: Can you convert this recursion function into a loop?
A(x) {
if x<0 return 0;
return something(x) + A(x-1)
}
Answer: Any recursive function can be made to iterate (into a loop), but in general you need to keep the state on an explicit stack yourself. For a simple accumulation like this one, a plain accumulator variable is enough:
A(x) {
temp = 0;
for i in 0..x {
temp = temp + something(i);
}
return temp;
}
Source: stackoverflow.com
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Answer: When the list is sorted we can use the binary search (also known as half-interval search, logarithmic search, or binary chop) technique to find items on the list. Here's a step-by-step description of using binary search:
1. Let `min = 1` and `max = n`.
2. Guess the average of `max` and `min`, rounded down so that it is an integer.
3. If you guessed the number, stop. You found it!
4. If the guess was too low, set `min` to be one larger than the guess.
5. If the guess was too high, set `max` to be one smaller than the guess.
6. Go back to step 2.
In this example we are looking for the array item with value `4`:
Each operation in binary search reduces the size of the problem by half, hence the complexity of binary search is `O(log n)`. The binary search algorithm can be written either recursively or iteratively.
Source: www.tutorialspoint.com
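The steps above can be sketched as an iterative binary search in Java (0-based indices instead of the 1-based `min = 1` in the description; the class name is ours):

```java
public class BinarySearch {
    // Returns the index of target in the sorted array a, or -1 if absent.
    public static int search(int[] a, int target) {
        int min = 0, max = a.length - 1;
        while (min <= max) {
            int mid = min + (max - min) / 2; // average of min and max, rounded down
            if (a[mid] == target) return mid;
            if (a[mid] < target) min = mid + 1; // guess too low: discard lower half
            else max = mid - 1;                 // guess too high: discard upper half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5, 6, 7, 8};
        System.out.println(search(a, 4)); // 3
        System.out.println(search(a, 9)); // -1
    }
}
```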
Answer:
Linear (sequential) search goes through all possible elements in some array and compares each one with the desired element. It may take up to `O(n)` operations, where n is the size of the array, and is widely considered to be horribly slow. In linear search, one operation reduces the size of the problem by only one (in binary search, one operation halves it). Despite that, it can still be used when:
O(1)
When you ask MySQL something like SELECT x FROM y WHERE z = t
, and z
is a column without an index, linear search is performed with all the consequences of it. This is why adding an index to searchable columns is important.
Source: bytescout.com
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
O(log n)
? βββRead answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
O(log n)
on a sorted Linked List? ββββRead answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
O(n)
time? ββββRead answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Answer: Bubble Sort is based on the idea of repeatedly comparing pairs of adjacent elements and then swapping their positions if they are in the wrong order. Bubble sort is a stable, in-place sort algorithm.
How it works:
1. In an unsorted array of `n` elements, start with the first two elements and sort them in ascending order (compare the elements to check which one is greater).
2. Move to the next pair of adjacent elements and sort them the same way; continue to the end of the array.
3. Repeat these passes over the array until no more swaps are required.
Visualisation:
Source: github.com
Answer: Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output. Sorting has direct applications in database algorithms, divide and conquer methods, data structure algorithms, and many more.
Source: en.wikipedia.org
Answer: The idea of an in-place algorithm isn't unique to sorting, but sorting is probably the most important case, or at least the most well-known. The idea is about space efficiency - using the minimum amount of RAM, hard disk or other storage that you can get away with.
The idea is to produce an output in the same memory space that contains the input by successively transforming that data until the output is produced. This avoids the need to use twice the storage - one area for the input and an equal-sized area for the output.
Quicksort is one example of In-Place Sorting.
Source: stackoverflow.com
Answer: The ideal sorting algorithm would have the following properties:
- Stable: equal keys are not reordered.
- Operates in place, requiring `O(1)` extra space.
- Worst-case `O(n log n)` key comparisons.
- Worst-case `O(n)` swaps.
- Adaptive: speeds up to `O(n)` when the data is nearly sorted or when there are few unique keys.
There is no algorithm that has all of these properties, and so the choice of sorting algorithm depends on the application.
Source: www.toptal.com
Answer: Sorting algorithms can be categorised based on the following parameters:
- The number of swaps or inversions required: how many times the algorithm swaps elements in order to sort the input. `Selection Sort` requires the minimum number of swaps.
- The number of comparisons: most common sorting algorithms perform `O(n log n)` comparisons in the best case and `O(n^2)` comparisons in the worst case for most inputs.
- Whether or not they use recursion: some sorting algorithms, such as `Quick Sort`, use recursive techniques to sort the input. Other sorting algorithms, such as `Selection Sort` or `Insertion Sort`, use non-recursive techniques. Finally, some sorting algorithms, such as `Merge Sort`, make use of both recursive as well as non-recursive techniques to sort the input.
- Whether they are stable or unstable: a sorting algorithm is `stable` if it maintains the relative order of elements with equal keys. In other words, two equivalent elements remain in the same order in the sorted output as they were in the input.
  - `Insertion Sort`, `Merge Sort`, and `Bubble Sort` are stable.
  - `Heap Sort` and `Quick Sort` are not stable.
- The amount of extra space required: some sorting algorithms can sort a list in place, requiring only `O(1)` extra space for sorting.
  - `Insertion Sort` and `Quick Sort` are `in place` sorts, as we move the elements about the pivot and do not actually use a separate array, which is NOT the case in merge sort, where the size of the input must be allocated beforehand to store the output during the sort.
  - `Merge Sort` is an example of an `out of place` sort, as it requires extra memory space for its operations.
Source: www.freecodecamp.org
Answer: Insertion Sort is an in-place, stable, comparison-based sorting algorithm. The idea is to maintain a sub-list which is always sorted. An element which is to be 'insert'ed in this sorted sub-list, has to find its appropriate place and then it has to be inserted there. Hence the name, insertion sort.
Steps on how it works:
1. If it is the first element, it is already sorted.
2. Pick the next element.
3. Compare it with all elements in the sorted sub-list.
4. Shift all the elements in the sorted sub-list that are greater than the value to be sorted.
5. Insert the value.
6. Repeat until the list is sorted.
Visualisation:
Source: medium.com
Answer: Advantages:
- Simple to understand and implement.
- Stable and in-place, requiring only `O(1)` extra space.
- It can detect an already (or nearly) sorted list, in which case it runs in `O(n)`.
Disadvantages:
- It takes `O(n^2)` time in the worst as well as the average case. Because of that, Bubble sort does not deal well with a large set of data. For example, Bubble sort is three times slower than Quicksort even for n = 100.
Answer:
In Bubble sort, you know that after k
passes, the largest k
elements are sorted at the k
last entries of the array, so the conventional Bubble sort uses:
public static void bubblesort(int[] a) {
for (int i = 1; i < a.length; i++) {
boolean is_sorted = true;
        for (int j = 0; j < a.length - i; j++) { // skip the last i-1 elements, already sorted (a naive version would always use a.length - 1)
if (a[j] > a[j+1]) {
int temp = a[j];
a[j] = a[j+1];
a[j+1] = temp;
is_sorted = false;
}
}
if(is_sorted) return;
}
}
Now, that would still do a lot of unnecessary iterations when the array has a long sorted tail of largest elements. If you remember where you made your last swap, you know that after that index, there are the largest elements in order, so:
public static void bubblesort(int[] a) {
int lastSwap = a.length - 1;
for (int i = 1; i< a.length; i++) {
boolean is_sorted = true;
int currentSwap = -1;
        for (int j = 0; j < lastSwap; j++) { // only iterate up to the last swap position (previously a.length - i)
if (a[j] > a[j+1]) {
int temp = a[j];
a[j] = a[j+1];
a[j+1] = temp;
is_sorted = false;
currentSwap = j;
}
}
if (is_sorted) return;
lastSwap = currentSwap;
}
}
This allows skipping over many elements, resulting in about a 50% worst-case improvement in comparison count (though no improvement in swap counts), and adds very little complexity.
Source: stackoverflow.com
Answer:
The add()
method below walks down the list until it finds the appropriate position. Then, it splices in the new node and updates the start
, prev
, and curr
pointers where applicable.
Note that the reverse operation, namely removing elements, doesn't need to change, because you are simply throwing things away which would not change any order in the list.
Source: stackoverflow.com
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Answer: A Stack is a container of objects that are inserted and removed according to the last-in first-out (LIFO) principle. In the pushdown stacks only two operations are allowed: push the item into the stack, and pop the item out of the stack.
There are basically three operations that can be performed on stacks:
- push - add an item to the top of the stack,
- pop - remove the item at the top of the stack,
- peek (or top) - look at the top item without removing it.
A stack is a limited access data structure - elements can be added and removed from the stack only at the top. push adds an item to the top of the stack, pop removes the item from the top. A helpful analogy is to think of a stack of books; you can remove only the top book, also you can add a new book on the top.
Source: www.cs.cmu.edu
Answer: A stack is a recursive data structure, so it's either empty, or it consists of a top element together with the rest, which is itself a stack.
Source: www.cs.cmu.edu
Answer: Because they help manage your data in a more particular way than arrays and lists. It means that when you're debugging a problem, you won't have to wonder if someone randomly inserted an element into the middle of your list, messing up some invariants.
Arrays and lists are random access. They are very flexible and also easily corruptible. If you want to manage your data as FIFO or LIFO it's best to use those, already implemented, collections.
More practically, you should use a queue when you want your data handled first-in-first-out and a stack when you want it last-in-first-out.
Source: stackoverflow.com
Implement a Stack using two Queues (with efficient `push`) ★★ Details:
Given two queues with their standard operations (enqueue
, dequeue
, isempty
, size
), implement a stack with its standard operations (pop
, push
, isempty
, size
). The stack should be efficient when pushing an item.
Answer:
Given we have `queue1` and `queue2`:
- push - `O(1)`: enqueue the new element onto `queue1`.
- pop - `O(n)`: while the size of `queue1` is bigger than 1, pipe (dequeue + enqueue) the dequeued items from `queue1` into `queue2`; dequeue and return the last item of `queue1`, then switch the names of `queue1` and `queue2`.
Source: stackoverflow.com
Details: Suppose we have two stacks and no other temporary variable. Is it possible to "construct" a queue data structure using only the two stacks?
Answer:
Keep two stacks, let's call them `inbox` and `outbox`.
- Enqueue: push the new element onto `inbox`.
- Dequeue: if `outbox` is empty, refill it by popping each element from `inbox` and pushing it onto `outbox`; then pop and return the top element from `outbox`.
Source: stackoverflow.com
What is the complexity of `push` and `pop` for a Stack implemented using a LinkedList? ★★ Answer:
Both are `O(1)`. Note, you don't have to insert at the end of the list: if you insert at the front of a (singly-linked) list, they are both `O(1)`.
Stack contains 1,2,3:
[1]->[2]->[3]
Push 5:
[5]->[1]->[2]->[3]
Pop:
[1]->[2]->[3] // returning 5
Source: stackoverflow.com
O(1)
ββDetails: Design a stack that supports push, pop, top, and retrieving the minimum element in constant time.
push(x)
- Push element x onto stack.pop()
- Removes the element on top of the stack.top()
- Get the top element.getMin()
- Retrieve the minimum element in the stack.Answer:
Using a linked list store the current minimum value. When you add a new number it looks at the previous min and changes the current min to the current value if the current value is lower. Note, each node stores the min
value at or below it so we don't need to recalculate min
on pop.
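The per-node minimum idea above can be sketched in Java; each entry remembers the minimum at or below it, so `getMin()` never rescans after a pop (a minimal illustration, names are ours):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MinStack {
    private static class Entry {
        final int val, min;
        Entry(int v, int m) { val = v; min = m; }
    }
    private final Deque<Entry> stack = new ArrayDeque<>();

    public void push(int x) {
        // The stored min is the smaller of x and the previous min.
        int min = stack.isEmpty() ? x : Math.min(x, stack.peek().min);
        stack.push(new Entry(x, min));
    }
    public void pop()   { stack.pop(); }
    public int top()    { return stack.peek().val; }
    public int getMin() { return stack.peek().min; }

    public static void main(String[] args) {
        MinStack s = new MinStack();
        s.push(5); s.push(2); s.push(7);
        System.out.println(s.getMin()); // 2
        s.pop(); s.pop();
        System.out.println(s.getMin()); // 5
    }
}
```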
Answer: The following are the steps to reversing a String using a Stack:
1. Convert the input `String` to a `char[]`.
2. Push each character of the array onto a `Stack`.
3. Pop the characters off the stack one by one back into the `char[]`; they come off in reverse (LIFO) order.
4. Convert the `char[]` back to a `String`.
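The four steps can be sketched in Java (using `ArrayDeque` as the stack; the class name is ours):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ReverseWithStack {
    public static String reverse(String s) {
        char[] chars = s.toCharArray();           // 1. String -> char[]
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : chars) stack.push(c);       // 2. push each char
        for (int i = 0; i < chars.length; i++)
            chars[i] = stack.pop();               // 3. pop back in LIFO order
        return new String(chars);                 // 4. char[] -> String
    }

    public static void main(String[] args) {
        System.out.println(reverse("stack")); // kcats
    }
}
```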
.Answer:
They're very useful because they afford you constant time `O(1)` operations when inserting or removing from the front of a data structure. One common use of a stack is in compilers, where a stack can be used to make sure that the brackets and parentheses in a code file are all balanced, i.e., have an opening and closing counterpart. Stacks are also very useful in evaluating mathematical expressions.
Source: medium.com
Answer: You can simulate a linked list by using two stacks. One stack is the "list," and the other is used for temporary storage.
This isn't terribly efficient, by the way, but it would in fact work.
Source: stackoverflow.com
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Answer: A string is generally considered a data type and is often implemented as an array data structure of bytes (or words) that stores a sequence of elements, typically characters, using some character encoding. String may also denote more general arrays or other sequence (or list) data types and structures.
Source: dev.to
Answer: Char arrays:
Strings:
Source: dev.to
Answer: The following are the steps to reversing a String using a Stack:
1. Convert the input `String` to a `char[]`.
2. Push each character of the array onto a `Stack`.
3. Pop the characters off the stack one by one back into the `char[]`; they come off in reverse (LIFO) order.
4. Convert the `char[]` back to a `String`.
.Answer: Strings can either be mutable or immutable.
Source: dev.to
Answer:
A "string" is really just an array of char
s; a null-terminated string is one where a null character '\0'
marks the end of the string (not necessarily the end of the array). All strings in code (delimited by double quotes ""
) are automatically null-terminated by the compiler.
So for example, "hi"
is the same as {'h', 'i', '\0'}
.
Null-terminated strings are often a drain on performance, for the obvious reason that the time taken to discover the length depends on the length. The usual solution is to do both - keep the length and maintain the null terminator. It's not much extra work and means that you are always ready to pass the string to any function.
Source: stackoverflow.com
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
char[]
preferred over String for passwords? ββββRead answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
null
terminated strings? βββββRead answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Answer: Trees are well-known as a non-linear data structure. They don't store data in a linear way; they organize it hierarchically.
A tree is a collection of entities called nodes. Nodes are connected by edges. Each node contains a value or data or key, and it may or may not have a child node. The first node of the tree is called the root. Leaves are the last nodes on a tree. They are nodes without children.
Source: www.freecodecamp.org
Answer: A normal tree has no restrictions on the number of children each node can have. A binary tree is made of nodes, where each node contains a "left" pointer, a "right" pointer, and a data element.
There are three different types of binary trees:
- Full binary tree: every node has either 0 or 2 children.
- Complete binary tree: every level, except possibly the last, is completely filled, and all nodes in the last level are as far left as possible.
- Perfect binary tree: all interior nodes have two children and all leaves are at the same depth.
Source: study.com
Answer:
Source: medium.com
Answer: A binary search tree is a data structure that allows you to quickly maintain a sorted list of numbers.
- It is called a binary tree because each node has a maximum of two children.
- It is called a search tree because it can be used to search for the presence of a number in `O(log n)` time.
The properties that separate a binary search tree from a regular binary tree are:
- All nodes of the left subtree are less than the root node.
- All nodes of the right subtree are greater than the root node.
- Both subtrees of each node are also binary search trees, i.e. they have the above two properties.
Source: www.programiz.com
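The ordering property above makes lookup a simple walk: go left when the target is smaller, right when larger (a minimal sketch; the `Node` class and method names are ours):

```java
public class Bst {
    static class Node { int key; Node left, right; Node(int k) { key = k; } }

    public static boolean contains(Node root, int key) {
        Node cur = root;
        while (cur != null) {
            if (key == cur.key) return true;
            // Each step discards one whole subtree.
            cur = (key < cur.key) ? cur.left : cur.right;
        }
        return false;
    }

    public static void main(String[] args) {
        Node root = new Node(8);
        root.left = new Node(3);
        root.right = new Node(10);
        System.out.println(contains(root, 3));  // true
        System.out.println(contains(root, 7));  // false
    }
}
```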
Answer:
That is a basic (generic) tree structure that can be used for String
or any other object:
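The snippet referenced here is not included, so the following is a hedged reconstruction of such a generic n-ary tree node in Java (all names are ours):

```java
import java.util.ArrayList;
import java.util.List;

public class TreeNode<T> {
    private final T data;
    private final List<TreeNode<T>> children = new ArrayList<>();

    public TreeNode(T data) { this.data = data; }

    // Creates a child node holding childData and attaches it to this node.
    public TreeNode<T> addChild(T childData) {
        TreeNode<T> child = new TreeNode<>(childData);
        children.add(child);
        return child;
    }

    public T getData() { return data; }
    public List<TreeNode<T>> getChildren() { return children; }

    public static void main(String[] args) {
        TreeNode<String> root = new TreeNode<>("root");
        root.addChild("child-1");
        root.addChild("child-2");
        System.out.println(root.getChildren().size()); // 2
    }
}
```

Because the node is generic over `T`, the same structure works for `String` or any other object type.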
Source: stackoverflow.com
Answer: Graph:
- Consists of a set of vertices (nodes) and a set of edges connecting some or all of them.
- Any edge can connect any two vertices; the graph may contain cycles.
- It does not have to be connected; it can consist of disconnected parts, and it has no notion of a root.
Tree:
- A special kind of graph that is connected and has no cycles.
- Has a root node; every node except the root has exactly one parent.
- A tree with N nodes always has exactly N - 1 edges.
Source: stackoverflow.com
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Answer: Trie (also called **digital tree** or **prefix tree**) is a tree-based data structure, which is used for efficient retrieval of a key in a large data-set of strings. Unlike a binary search tree, no node in the tree stores the key associated with that node; instead, its position in the tree defines the key with which it is associated; i.e., the value of the key is distributed across the structure. All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. Each complete English word has an arbitrary integer value associated with it.
Source: medium.com
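A minimal trie over lowercase a-z can be sketched as follows; note how positions in the tree, not the nodes themselves, encode the keys (a simplified illustration; names are ours, and it stores only a membership flag instead of an associated value):

```java
public class Trie {
    private final Trie[] children = new Trie[26]; // one slot per letter a-z
    private boolean isWord;

    public void insert(String word) {
        Trie node = this;
        for (char c : word.toCharArray()) {
            int i = c - 'a';
            if (node.children[i] == null) node.children[i] = new Trie();
            node = node.children[i];
        }
        node.isWord = true; // mark the end of a complete word
    }

    public boolean contains(String word) {
        Trie node = this;
        for (char c : word.toCharArray()) {
            node = node.children[c - 'a'];
            if (node == null) return false;
        }
        return node.isWord; // a path may exist as a prefix only
    }

    public static void main(String[] args) {
        Trie t = new Trie();
        t.insert("tea");
        System.out.println(t.contains("tea")); // true
        System.out.println(t.contains("te"));  // false, prefix only
    }
}
```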
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe
Read answer on π FullStack.Cafe