Web Developer Interview questions and answers
Originally published on 👉 11 Painful Git Interview Questions You Will Cry On | FullStack.Cafe
Answer:
Source: stackoverflow.com
Answer:
In the simplest terms, `git pull` does a `git fetch` followed by a `git merge`.

When you use `pull`, Git tries to automatically do your work for you. It is context sensitive, so Git will merge any pulled commits into the branch you are currently working in. `pull` automatically merges the commits without letting you review them first. If you don't closely manage your branches, you may run into frequent conflicts.

When you `fetch`, Git gathers any commits from the target branch that do not exist in your current branch and stores them in your local repository. However, it does not merge them with your current branch. This is particularly useful if you need to keep your repository up to date, but are working on something that might break if you update your files. To integrate the commits into your master branch, you use `merge`.
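The fetch-then-merge flow can be sketched end-to-end in a shell session; the repository layout, file names, and commit messages below are purely illustrative:

```shell
set -e
# Illustrative setup: a toy "remote" repository and a clone of it
tmp=$(mktemp -d); cd "$tmp"
git init -q upstream && cd upstream
git config user.email you@example.com && git config user.name you
echo one > file.txt && git add file.txt && git commit -qm "c1"
cd "$tmp" && git clone -q upstream work

# Meanwhile, a new commit lands on the remote
cd "$tmp/upstream" && echo two >> file.txt && git commit -qam "c2"

cd "$tmp/work"
# Step 1: fetch downloads the new commits but leaves your branch untouched
git fetch origin
# Step 2: review what came in, then merge explicitly
git log --oneline HEAD..FETCH_HEAD
git merge -q FETCH_HEAD   # fetch + merge is what `git pull` does in one go
```

Doing the two steps separately gives you the review window that a plain `git pull` skips.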
Source: stackoverflow.com
Answer:
A branch is just a separate version of the code.
A pull request is when someone takes the repository, makes their own branch, makes some changes, then tries to merge that branch in (i.e., put their changes into the other person's code repository).
Source: stackoverflow.com
Answer: Say you have this, where C is your HEAD and (F) is the state of your files.

```
    (F)
A-B-C
    ↑
  master
```

1. To undo the commit and completely remove the changes:

```shell
git reset --hard HEAD~1
```

Now B is the HEAD. Because you used `--hard`, your files are reset to their state at commit B.

2. To undo the commit but keep your changes:

```shell
git reset HEAD~1
```

Now we tell Git to move the HEAD pointer back one commit (B) and leave the files as they are; `git status` shows the changes you had checked into C.

3. To undo your commit but leave your files and your index:

```shell
git reset --soft HEAD~1
```

When you do `git status`, you'll see that the same files are in the index as before.
Source: stackoverflow.com
Answer: The command git cherry-pick is typically used to introduce particular commits from one branch within a repository onto a different branch. A common use is to forward- or back-port commits from a maintenance branch to a development branch.
This is in contrast with other ways such as merge and rebase which normally apply many commits onto another branch.
Consider:
```shell
git cherry-pick <commit-hash>
```
Source: stackoverflow.com
Answer: The Forking Workflow is fundamentally different than other popular Git workflows. Instead of using a single server-side repository to act as the “central” codebase, it gives every developer their own server-side repository. The Forking Workflow is most often seen in public open source projects.
The main advantage of the Forking Workflow is that contributions can be integrated without the need for everybody to push to a single central repository, which leads to a cleaner project history. Developers push to their own server-side repositories, and only the project maintainer can push to the official repository.
When developers are ready to publish a local commit, they push the commit to their own public repository—not the official one. Then, they file a pull request with the main repository, which lets the project maintainer know that an update is ready to be integrated.
Source: atlassian.com
Answer:
Source: stackoverflow.com
Answer:
Gitflow workflow employs two parallel long-running branches to record the history of the project, `master` and `develop`:

- Master - is always ready to be released on LIVE, with everything fully tested and approved (production-ready).
- Hotfix - maintenance or "hotfix" branches are used to quickly patch production releases. Hotfix branches are a lot like release branches and feature branches except they're based on `master` instead of `develop`.
- Develop - is the branch to which all feature branches are merged and where all tests are performed. Only when everything has been thoroughly checked and fixed can it be merged into `master`.
- Feature - each new feature should reside in its own branch, which is branched off of `develop`, its parent, and merged back into it.
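The day-to-day commands behind this workflow might look like the following sketch (branch names, file names, and commit messages are illustrative):

```shell
set -e
# Illustrative setup: a toy project repository
tmp=$(mktemp -d); cd "$tmp"
git init -q project && cd project
git config user.email you@example.com && git config user.name you
echo v1 > app.txt && git add app.txt && git commit -qm "initial release"
release=$(git symbolic-ref --short HEAD)   # master (or main in newer Git)

# develop branches off the release branch and collects feature work
git checkout -qb develop
# each feature lives in its own branch off develop
git checkout -qb feature/login
echo login >> app.txt && git commit -qam "add login"
# a finished feature is merged back into develop
git checkout -q develop
git merge -q --no-ff -m "merge feature/login" feature/login
# once develop is fully tested, it is merged into the release branch
git checkout -q "$release"
git merge -q develop
```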
Source: atlassian.com
Answer:
The `git stash` command takes your uncommitted changes (both staged and unstaged), saves them away for later use, and then reverts them from your working copy.
Consider:

```shell
$ git status
On branch master
Changes to be committed:
    new file:   style.css
Changes not staged for commit:
    modified:   index.html

$ git stash
Saved working directory and index state WIP on master: 5002d47 our new homepage
HEAD is now at 5002d47 our new homepage

$ git status
On branch master
nothing to commit, working tree clean
```
One place we could use stashing is if we discover we forgot something in our last commit and have already started working on the next one in the same branch:

```shell
# Assume the latest commit was already done,
# you started working on the next patch, and discovered you were missing something

# stash away the current mess you made
$ git stash save

# make the missing changes in the working dir
# and add them to the last commit:
$ git add -u
$ git commit --amend

# back to work!
$ git stash pop
```
Source: atlassian.com
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Originally published on 👉 112+ Behavioral Interview Questions for Software Developers | FullStack.Cafe
Answer:
Originally published on 👉 13 Tricky CSS3 Interview Questions And Answers to Stand Out on Interview in 2018 | FullStack.Cafe
Answer: Float is a CSS positioning property. Floated elements remain a part of the flow of the web page. This is distinctly different from page elements that use absolute positioning; absolutely positioned page elements are removed from the flow of the webpage.

```css
#sidebar {
  float: right; /* left | right | none | inherit */
}
```

The CSS `clear` property can be used to position an element below `left`/`right`/`both` floated elements.
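As a sketch, clearing a floated sidebar might look like this (the selector names are illustrative):

```css
#sidebar {
  float: right;
}

/* This element starts below the floated sidebar instead of wrapping beside it */
.below-sidebar {
  clear: right; /* or left | both */
}
```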
Source: css-tricks.com
Answer: Both responsive and adaptive design attempt to optimize the user experience across different devices, adjusting for different viewport sizes, resolutions, usage contexts, control mechanisms, and so on.
Responsive design works on the principle of flexibility — a single fluid website that can look good on any device. Responsive websites use media queries, flexible grids, and responsive images to create a user experience that flexes and changes based on a multitude of factors. Like a single ball growing or shrinking to fit through several different hoops.
Adaptive design is more like the modern definition of progressive enhancement. Instead of one flexible design, adaptive design detects the device and other features, and then provides the appropriate feature and layout based on a predefined set of viewport sizes and other characteristics. The site detects the type of device used, and delivers the pre-set layout for that device. Instead of a single ball going through several different-sized hoops, you’d have several different balls to use depending on the hoop size.
Source: codeburst.io
Answer: When a browser displays a document, it must combine the document's content with its style information. It processes the document in two stages:
Source: developer.mozilla.org
Answer: Accessibility (a11y) is a measure of how accessible a computer system is to all people, including those with disabilities or impairments. It concerns both software and hardware and how they are configured in order to enable a disabled or impaired person to use that computer system successfully.
Accessibility is also known as assistive technology.
Source: techopedia.com
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
`translate()` instead of `absolute` positioning, or vice-versa? And why? ⭐⭐⭐⭐
Answer: Read Full Answer on 👉 FullStack.Cafe
Details:
Consider the three code fragments:

```css
/* A */
h1

/* B */
#content h1
```

```html
<!-- C -->
<div id="content">
    <h1 style="color: #fff">Headline</h1>
</div>
```

Which code fragment has the greater specificity?
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Originally published on 👉 15 ASP.NET Web API Interview Questions And Answers (2019 Update) | FullStack.Cafe
Answer: ASP.NET Web API is a framework that simplifies building HTTP services for a broader range of clients (including browsers as well as mobile devices) on top of the .NET Framework.
Using ASP.NET Web API, we can create non-SOAP-based services like plain XML or JSON strings, etc., with many other advantages, including:
Source: codeproject.com
Answer: Using ASP.NET Web API has a number of advantages, but the core advantages are:

- It works the HTTP way, using standard HTTP verbs like `GET`, `POST`, `PUT`, `DELETE`, etc. for all CRUD operations
- Support for formatting responses (JSON, XML, etc.) via `MediaTypeFormatter`
Source: codeproject.com
Answer: 500 – Internal Server Error
Consider:
```csharp
[Route("CheckId/{id}")]
[HttpGet]
public IHttpActionResult CheckId(int id)
{
    if (id > 100)
    {
        throw new ArgumentOutOfRangeException();
    }
    return Ok(id);
}
```
And the result is a `500 - Internal Server Error` response.
Source: docs.microsoft.com
Answer:
Consider:
```csharp
public class TweetsController : Controller {
    // GET: /Tweets/
    [HttpGet]
    public ActionResult Index() {
        return Json(Twitter.GetTweets(), JsonRequestBehavior.AllowGet);
    }
}
```

or

```csharp
public class TweetsController : ApiController {
    // GET: /Api/Tweets/
    public List<Tweet> Get() {
        return Twitter.GetTweets();
    }
}
```
Source: stackoverflow.com
Answer: A Web API controller action can return any of the following:
Source: medium.com
Answer:
Source: codeproject.com
Answer:
Source: codeproject.com
Answer: ASP.NET Web API v2 now supports attribute routing along with the convention-based approach. In convention-based routes, the route templates are already defined as follows:

```csharp
Config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{Controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);
```

So, any incoming request is matched against the already defined routeTemplate and routed to the matched controller action. But it's really hard to support certain URI patterns using the conventional routing approach, like nested routes on the same controller. For example, authors have books, customers have orders, students have courses, etc.

Such patterns can be defined using attribute routing, i.e., adding an attribute to the controller action as follows:

```csharp
[Route("books/{bookId}/authors")]
public IEnumerable<Author> GetAuthorsByBook(int bookId) { ..... }
```
Source: webdevelopmenthelp.net
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Originally published on 👉 15 Amazon Web Services Interview Questions and Answers for 2018 | FullStack.Cafe
Answer:
Source: whizlabs.com
Answer: By default, data on S3 is not encrypted, but you can enable server-side encryption in your object metadata when you upload your data to Amazon S3. As soon as your data reaches S3, it is encrypted and stored.
Source: aws.amazon.com
Answer: Many different types of instances can be launched from one AMI. The type of an instance generally determines the hardware components of the host computer that is used for the instance, and each instance type has distinct computing and memory capacity.

Once an instance is launched, it acts as a host, and user interaction with it is the same as with any other computer, but we have completely controlled access to our instances. AWS developer interviews may contain one or more AMI-based questions, so prepare yourself for the AMI topic very well.
Source: whizlabs.com
Answer: Use scp:
```shell
scp -i ec2key.pem username@ec2ip:/path/to/file .
```
Source: stackoverflow.com
Answer: AMI stands for Amazon Machine Image. It's an AWS template which provides the information (an operating system, an application server, and applications) required to launch an instance. An instance is a copy of the AMI running as a virtual server in the cloud. You can launch instances from as many different AMIs as you need. An AMI consists of the following:
Source: whizlabs.com
Answer: No. After you create a volume, you can attach it to any EC2 instance in the same Availability Zone. An EBS volume can be attached to only one EC2 instance at a time, but multiple volumes can be attached to a single instance.
Source: docs.aws.amazon.com
Answer: AWS Data Pipeline is a web service that you can use to automate the movement and transformation of data. With AWS Data Pipeline, you can define data-driven workflows, so that tasks can be dependent on the successful completion of previous tasks.
Source: docs.aws.amazon.com
Answer: There are four storage options for an Amazon EC2 instance:
Source: whizlabs.com
Details:
How can I find out the `instance id` of an EC2 instance from within the EC2 instance?
Answer: Run:

```shell
wget -q -O - http://169.254.169.254/latest/meta-data/instance-id
```

Or on Amazon Linux AMIs you can do:

```shell
$ ec2-metadata -i
instance-id: i-1234567890abcdef0
```
Source: stackoverflow.com
Answer:
EC2 is Amazon's service that allows you to create a server (AWS calls these instances) in the AWS cloud. You pay by the hour and only for what you use. You can do whatever you want with this instance as well as launch any number of instances.
Elastic Beanstalk is one layer of abstraction away from the EC2 layer. Elastic Beanstalk will set up an "environment" for you that can contain a number of EC2 instances, an optional database, as well as a few other AWS components such as an Elastic Load Balancer, an Auto Scaling group, and a Security Group. Elastic Beanstalk will then manage these items for you whenever you want to update your software running in AWS.
Source: stackoverflow.com
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Details: I have an Amazon EC2 micro instance (t1.micro). I want to upgrade this instance to large. This is our production environment, so what is the best and risk-free way to do this?
Answer: Read Full Answer on 👉 FullStack.Cafe
Originally published on 👉 15 Best Continuous Integration Interview Questions (2018 Revision) | FullStack.Cafe
Answer: Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.
Source: edureka.co
Answer:
Source: edureka.co
Answer: A CI server's function is to continuously integrate all changes being made and committed to the repository by different developers and to check for compile errors. It needs to build code several times a day, preferably after every commit, so that if a breakage happens it can detect which commit caused it.
Source: linoxide.com
Answer:
In Blue-Green Deployment, you have TWO complete environments: the Blue environment, which is running, and the Green environment, to which you want to upgrade. Once you swap the environments from blue to green, the traffic is directed to your new green environment. You can delete or keep your old blue environment as a backup until the green environment is stable.
In Rolling Deployment, you have only ONE complete environment. The code is deployed in the subset of instances of the same environment and moves to another subset after completion.
Source: stackoverflow.com
Answer:
Immediately after allocation, all the quantity of a resource is available. Provision removes a quantity of a resource from the available set. De-provision returns a quantity of a resource to the available set. At any time:
Allocated quantity = Available quantity + Provisioned quantity
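The invariant above can be sketched with a toy resource pool (the class and method names are illustrative, not from any real library):

```python
class ResourcePool:
    """Toy model of allocate/provision/de-provision accounting."""

    def __init__(self, allocated: int):
        self.allocated = allocated  # total quantity set aside up front
        self.provisioned = 0        # quantity currently handed out

    @property
    def available(self) -> int:
        # Invariant: allocated == available + provisioned
        return self.allocated - self.provisioned

    def provision(self, n: int) -> None:
        """Remove n units from the available set."""
        if n > self.available:
            raise ValueError("not enough available")
        self.provisioned += n

    def deprovision(self, n: int) -> None:
        """Return n units to the available set."""
        self.provisioned -= n


pool = ResourcePool(allocated=10)
pool.provision(4)
assert pool.allocated == pool.available + pool.provisioned  # 10 == 6 + 4
pool.deprovision(4)
assert pool.available == 10
```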
Source: dev.to
Answer:
Source: atlassian.com
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Sticky session, or session affinity, is a popular load balancing technique that requires a user session to always be served by an allocated machine.

In a load-balanced server application where user information is stored in the session, the session data would otherwise need to be available to all machines. This can be avoided by always serving a particular user session's requests from one machine. The machine is associated with a session as soon as the session is created, and all the requests in that session are always redirected to the associated machine. This ensures the user data is only on one machine while load is still shared.

This is typically done by using a SessionId cookie. The cookie is sent to the client on the first request, and every subsequent request from the client must contain that same cookie to identify the session.
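As an illustration, nginx's `ip_hash` directive implements a simple form of affinity by hashing the client address rather than using a cookie (the upstream and server names below are hypothetical):

```
upstream app_servers {
    ip_hash;                   # pin each client IP to the same backend
    server app1.example.com;
    server app2.example.com;
}

server {
    location / {
        proxy_pass http://app_servers;
    }
}
```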
**What are the issues with sticky sessions?**
There are a few issues that you may face with this approach:
Source: fromdev.com
Answer: Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green. At any time, only one of the environments is live, with the live environment serving all production traffic. For this example, Blue is currently live and Green is idle.
As you prepare a new version of your software, deployment and the final stage of testing takes place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the software in Green, you switch the router so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle.
This technique can eliminate downtime due to application deployment. In addition, blue-green deployment reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue.
Source: cloudfoundry.org
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Originally published on 👉 15 Essential HTML5 Interview Questions to Watch Out in 2018 | FullStack.Cafe
Answer:
Example:
```html
<!DOCTYPE html>
<html>
  <head>
    <meta name="description" content="I am a web page with description">
    <title>Home Page</title>
  </head>
  <body>
  </body>
</html>
```
Source: github.com/FuelFrontend
Answer: In HTML, some elements have optional tags. In fact, both the opening and closing tags of some elements may be completely removed from an HTML document, even though the elements themselves are required.
Three required HTML elements whose start and end tags are optional are the `html`, `head`, and `body` elements.
Source: computerhope.com
Answer:
Yes to both. The W3C documents state that the tags represent the header (`<header>`) and footer (`<footer>`) areas of their nearest ancestor "section". So not only can the page `<body>` contain a header and a footer, but so can every `<article>` and `<section>` element.
Source: stackoverflow.com
Answer: The DOM (Document Object Model) is a cross-platform API that treats HTML and XML documents as a tree structure consisting of nodes. These nodes (such as elements and text nodes) are objects that can be programmatically manipulated and any visible changes made to them are reflected live in the document. In a browser, this API is available to JavaScript where DOM nodes can be manipulated to change their styles, contents, placement in the document, or interacted with through event listeners.
Some practical points:

- Scripts can be placed at the end of the `<body>`, in the `<head>` with a `defer` attribute, or inside a `DOMContentLoaded` event listener. Scripts that manipulate DOM nodes should be run after the DOM has been constructed to avoid errors.
- `document.getElementById()` and `document.querySelector()` are common functions for selecting DOM nodes.
- Setting the `innerHTML` property to a new value runs the string through the HTML parser, offering an easy way to append dynamic HTML content to a node.

Source: developer.mozilla.org
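A minimal sketch of these APIs in the browser (the element ids and text are illustrative):

```html
<ul id="todo-list"></ul>
<script>
  document.addEventListener("DOMContentLoaded", () => {
    // Select an existing node, then create and attach a new one
    const list = document.querySelector("#todo-list");
    const item = document.createElement("li");
    item.textContent = "Learn the DOM";
    list.appendChild(item);
  });
</script>
```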
Answer:
HTML specifications such as `HTML5` define a set of rules that a document must adhere to in order to be "valid" according to that specification. In addition, a specification provides instructions on how a browser must interpret and render such a document.

A browser is said to "support" a specification if it handles valid documents according to the rules of the specification. As of yet, no browser supports all aspects of the `HTML5` specification (although all of the major browsers support most of it), and as a result, it is necessary for the developer to confirm whether the aspect they are making use of will be supported by all of the browsers on which they hope to display their content. This is why cross-browser support continues to be a headache for developers, despite the improved specifications.

In addition, `HTML5` defines some rules to follow for an invalid `HTML5` document (i.e., one that contains syntactical errors).

Source: w3.org
`localStorage` and `sessionStorage`. ⭐⭐⭐
Answer: With HTML5, web pages can store data locally within the user's browser. The data is stored in name/value pairs, and a web page can only access data stored by itself.

Differences between `localStorage` and `sessionStorage` regarding lifetime:

- `localStorage` is permanent: it does not expire and remains stored on the user's computer until a web app deletes it or the user asks the browser to delete it.
- `sessionStorage` has the same lifetime as the top-level window or browser tab in which the data got stored. When the tab is permanently closed, any data stored through `sessionStorage` is deleted.

Differences between `localStorage` and `sessionStorage` regarding storage scope:

- Both forms of storage are scoped to the document origin, so documents with different origins never share the stored objects.
- `sessionStorage` is also scoped on a per-window basis: two browser tabs with documents from the same origin have separate `sessionStorage` data. Unlike with `localStorage`, the same scripts from the same origin can't access each other's `sessionStorage` when opened in different tabs.

Source: w3schools.com
Answer: HTML 5 adds a lot of new features to the HTML specification.

New Doctype

Still using that pesky, impossible-to-memorize XHTML doctype?

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
```

If so, why? Switch to the new HTML5 doctype. You'll live longer -- as Douglas Quaid might say.

```html
<!DOCTYPE html>
```
New Structure

- `<section>` - to define sections of pages
- `<header>` - defines the header of a page
- `<footer>` - defines the footer of a page
- `<nav>` - defines the navigation on a page
- `<article>` - defines the article or primary content on a page
- `<aside>` - defines extra content like a sidebar on a page
- `<figure>` - defines images that annotate an article

New Inline Elements
These inline elements define some basic concepts and keep them semantically marked up, mostly to do with time:

- `<mark>` - to indicate content that is marked in some fashion
- `<time>` - to indicate content that is a time or date
- `<meter>` - to indicate content that is a fraction of a known range, such as disk usage
- `<progress>` - to indicate the progress of a task towards completion

New Form Types
```html
<input type="datetime">
<input type="datetime-local">
<input type="date">
<input type="month">
<input type="week">
<input type="time">
<input type="number">
<input type="range">
<input type="email">
<input type="url">
```
New Elements

There are a few exciting new elements in HTML 5:

- `<canvas>` - an element to give you a drawing space in JavaScript on your Web pages. It can let you add images or graphs to tool tips or just create dynamic graphs on your Web pages, built on the fly.
- `<video>` - add video to your Web pages with this simple tag.
- `<audio>` - add sound to your Web pages with this simple tag.

No More Types for Scripts and Links
You possibly still add the `type` attribute to your `link` and `script` tags.

```html
<link rel="stylesheet" href="path/to/stylesheet.css" type="text/css" />
<script type="text/javascript" src="path/to/script.js"></script>
```

This is no longer necessary. It's implied that both of these tags refer to stylesheets and scripts, respectively. As such, we can remove the `type` attribute altogether.

```html
<link rel="stylesheet" href="path/to/stylesheet.css" />
<script src="path/to/script.js"></script>
```
Make your content editable

The new browsers have a nifty new attribute that can be applied to elements, called `contenteditable`. As the name implies, this allows the user to edit any of the text contained within the element, including its children. There are a variety of uses for something like this, including an app as simple as a to-do list, which also takes advantage of local storage.

```html
<h2> To-Do List </h2>
<ul contenteditable="true">
    <li> Break mechanical cab driver. </li>
    <li> Drive to abandoned factory </li>
    <li> Watch video of self </li>
</ul>
```
Attributes

- `required` - to indicate the form field is required
- `autofocus` - puts the cursor on the input field

Source: github.com/FuelFrontend
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Originally published on 👉 15+ Azure Interview Questions And Answers (2018 REVISIT) | FullStack.Cafe
Answer: By creating a cloud service, you can deploy a multi-tier web application in Azure, defining multiple roles to distribute processing and allow flexible scaling of your application. A cloud service consists of one or more web roles and/or worker roles, each with its own application files and configuration. Azure Websites and Virtual Machines also enable web applications on Azure. The main advantage of cloud services is the ability to support more complex multi-tier architectures.
Source: mindmajix.com
Answer: Azure Functions is a solution for easily running small pieces of code, or "functions," in the cloud. We can write just the code we need for the problem at hand, without worrying about a whole application or the infrastructure to run it, and use a language of our choice such as C#, F#, Node.js, Java, or PHP. Azure Functions lets us develop serverless applications on Microsoft Azure.
Answer: Resource groups (RG) in Azure are an approach to group a collection of assets in logical groups for easy or even automatic provisioning, monitoring, and access control, and for more effective management of their costs. The underlying technology that powers resource groups is the Azure Resource Manager (ARM).
Source: onlinetech.com
Answer: Every Azure App Service web application includes a "hidden" service site called Kudu.
Kudu Console, for example, is a debugging service for the Azure platform which allows you to explore your web app and troubleshoot it: viewing deployment logs, taking a memory dump, uploading files to your web app, adding JSON endpoints to your web apps, etc.
Answer: Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data, such as text or binary data. Azure Storage offers three types of blobs:
Source: docs.microsoft.com
Answer: Service Fabric enables you to build applications that consist of microservices:
Stateless microservices (such as protocol gateways and web proxies) do not maintain a mutable state outside a request and its response from the service. Azure Cloud Services worker roles are an example of a stateless service.
Stateful microservices (such as user accounts, databases, devices, shopping carts, and queues) maintain a mutable, authoritative state beyond the request and its response.
Source: quora.com
Answer: Microsoft Azure Key Vault is a cloud-hosted management service that allows users to encrypt keys and small secrets by using keys that are protected by hardware security modules (HSMs). Small secrets are data less than 10 KB like passwords and .PFX files.
Source: searchwindowsserver.techtarget.com
Answer: Azure Multi-Factor Authentication (MFA) is Microsoft's two-step verification solution. It delivers strong authentication via a range of verification methods, including phone call, text message, or mobile app verification.
Source: docs.microsoft.com
Answer: Azure Table storage is a service that stores structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. Because Table storage is schemaless, it's easy to adapt your data as the needs of your application evolve. Access to Table storage data is fast and cost-effective for many types of applications, and is typically lower in cost than traditional SQL for similar volumes of data.
Source: docs.microsoft.com
Answer: The Azure Resource Manager (ARM) is the service used to provision resources in your Azure subscription. ARM provides a way to describe the resources in a resource group using JSON documents (ARM templates). By using an ARM template, you have a fully repeatable configuration of a given deployment. This is extremely valuable for production environments, but especially so for dev/test deployments. By having a set template, we can ensure that any time a new dev or test deployment is required (which happens all the time), it can be achieved in moments, safe in the knowledge that it will be identical to the previous environments.
Source: codeisahighway.com
Answer: WebJobs is a feature of Azure App Service that enables you to run a program or script in the same context as a web app, API app, or mobile app. There is no additional cost to use WebJobs.
The Azure WebJobs SDK is a framework that simplifies the task of writing background processing code that runs in Azure WebJobs. It includes a declarative binding and trigger system that works with Azure Storage Blobs, Queues, and Tables, as well as Service Bus. You can also trigger an Azure WebJob using the Kudu API.
Source: github.com/Azure
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Originally published on 👉 19+ Expert Node.js Interview Questions in 2018 | FullStack.Cafe
Answer: Node.js is a JavaScript runtime environment built on Google Chrome's JavaScript engine (the V8 engine).
Node.js provides a runtime in which JavaScript-based scripts can be interpreted and executed (it is analogous to the JVM for Java bytecode). This runtime allows JavaScript code to be executed on any machine outside a browser. Because of this runtime, JavaScript can now be executed on the server as well.
Node.js = Runtime Environment + JavaScript Library
Source: tutorialspoint.com
Answer:
Globally installed packages/dependencies are stored in
Source: tutorialspoint.com
Answer: Error-first callbacks are used to pass errors and data. The first argument is always an error object that the programmer has to check if something went wrong. Additional arguments are used to pass data.
```javascript
fs.readFile(filePath, function(err, data) {
    if (err) {
        // handle the error and stop here
        return;
    }
    // use the data object
});
```
Source: tutorialspoint.com
Answer: The following are the main benefits of using Node.js:
Source: tutorialspoint.com
Answer: Node provides a single thread to programmers so that code can be written easily and without bottlenecks. Node internally uses multiple POSIX threads for various I/O operations such as file, DNS, and network calls.

When Node gets an I/O request, it creates or uses a thread to perform that I/O operation, and once the operation is done, it pushes the result to the event queue. On each such event, the event loop runs and checks the queue; if the execution stack of Node is empty, it adds the queue result to the execution stack.
This is how Node manages concurrency.
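A minimal sketch of this ordering (assuming a Node.js environment): the synchronous code runs to completion first, and a queued callback runs only once the call stack is empty.

```typescript
// Synchronous code runs to completion before any queued callback fires.
const order: string[] = [];

order.push("start");
setImmediate(() => {
  // executed by the event loop once the synchronous code has finished
  order.push("immediate");
  console.log(order.join(" -> ")); // start -> end -> immediate
});
order.push("end");
```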
Source: codeforgeek.com
Answer: To do so you have several options:
yield with Generators and/or Promises
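As a sketch of the Promise-based option (the `delay` helper below is hypothetical, not from any library): a callback-style API can be wrapped in a Promise and consumed with async/await, avoiding nested callbacks.

```typescript
// Hypothetical helper wrapping setTimeout in a Promise.
function delay(ms: number): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(`waited ${ms}ms`), ms));
}

async function main(): Promise<void> {
  // Each step reads top-to-bottom instead of nesting callbacks.
  const first = await delay(10);
  const second = await delay(20);
  console.log(first, second);
}

main();
```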
Source: tutorialspoint.com
Answer: The event loop is what allows Node.js to perform non-blocking I/O operations — despite the fact that JavaScript is single-threaded — by offloading operations to the system kernel whenever possible.
Every I/O operation requires a callback; once an operation completes, its callback is pushed onto the event loop for execution. Since most modern kernels are multi-threaded, they can handle multiple operations executing in the background. When one of these operations completes, the kernel tells Node.js so that the appropriate callback may be added to the poll queue to eventually be executed.
Source: blog.risingstack.com
Answer: By providing a callback function. The callback gets called whenever the corresponding event is triggered.
Source: tutorialspoint.com
Answer:
All objects that emit events are instances of the EventEmitter class. These objects expose an eventEmitter.on()
function that allows one or more functions to be attached to named events emitted by the object.
When the EventEmitter object emits an event, all of the functions attached to that specific event are called synchronously.
const EventEmitter = require('events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
console.log('an event occurred!');
});
myEmitter.emit('event');
Source: tutorialspoint.com
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Details: Consider following code snippet:
{
console.time("loop");
for (var i = 0; i < 1000000; i += 1) {
// Do nothing
}
console.timeEnd("loop");
}
The time required to run this code in Google Chrome is considerably more than the time required to run it in Node.js. Explain why this is so, even though both use the V8 JavaScript engine.
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Details: Consider the code:
async function check(req, res) {
try {
const a = await someOtherFunction();
const b = await somethingElseFunction();
res.send("result")
} catch (error) {
res.send(error.stack);
}
}
Rewrite the code sample without try/catch block.
Answer: Read Full Answer on 👉 FullStack.Cafe
Originally published on 👉 20 .NET Core Interview Questions and Answers | FullStack.Cafe
Answer: The .NET Core platform is a new .NET stack that is optimized for open source development and agile delivery on NuGet.
.NET Core has two major components. It includes a small runtime that is built from the same codebase as the .NET Framework CLR. The .NET Core runtime includes the same GC and JIT (RyuJIT), but doesn’t include features like Application Domains or Code Access Security. The runtime is delivered via NuGet, as part of the ASP.NET Core package.
.NET Core also includes the base class libraries. These libraries are largely the same code as the .NET Framework class libraries, but have been factored (dependencies removed) to enable shipping a smaller set of libraries. These libraries are shipped as System.*
NuGet packages on NuGet.org.
Source: stackoverflow.com
Answer: To be simple:
Source: stackoverflow.com
Answer:
Flexible deployment: Can be included in your app or installed side-by-side user- or machine-wide.
Cross-platform: Runs on Windows, macOS and Linux; can be ported to other OSes. The supported Operating Systems (OS), CPUs and application scenarios will grow over time, provided by Microsoft, other companies, and individuals.
Command-line tools: All product scenarios can be exercised at the command-line.
Compatible: .NET Core is compatible with .NET Framework, Xamarin and Mono, via the .NET Standard Library.
Open source: The .NET Core platform is open source, using MIT and Apache 2 licenses. Documentation is licensed under CC-BY. .NET Core is a .NET Foundation project.
Supported by Microsoft: .NET Core is supported by Microsoft, per .NET Core Support
Source: stackoverflow.com
Answer:
The SDK is all of the tooling needed to develop a .NET Core application, such as the CLI and the compiler.
The runtime is the "virtual machine" that hosts/runs the application and abstracts all the interaction with the base operating system.
Source: stackoverflow.com
Answer: The Common Type System (CTS) standardizes the data types of all programming languages under the .NET umbrella into a common set of types, enabling easy and smooth interoperability among .NET languages.
CTS is designed as a singly rooted object hierarchy with System.Object
as the base type from which all other types are derived. CTS supports two different kinds of types:
Source: c-sharpcorner.com
Answer:
Source: talkingdotnet.com
Answer: .NET as a whole now has two flavors:
.NET Core and the .NET Framework have (for the most part) a subset-superset relationship. .NET Core is named “Core” since it contains the core features from the .NET Framework, for both the runtime and framework libraries. For example, .NET Core and the .NET Framework share the GC, the JIT and types such as String
and List
.
.NET Core was created so that .NET could be open source, cross platform and be used in more resource-constrained environments.
Source: stackoverflow.com
Answer:
A .NET runtime, which provides a type system, assembly loading, a garbage collector, native interop and other basic services.
A set of framework libraries, which provide primitive data types, app composition types and fundamental utilities.
A set of SDK tools and language compilers that enable the base developer experience, available in the .NET Core SDK.
The 'dotnet' app host, which is used to launch .NET Core apps. It selects the runtime and hosts the runtime, provides an assembly loading policy and launches the app. The same host is also used to launch SDK tools in much the same way.
Source: stackoverflow.com
Answer:
Xamarin usually runs on top of Mono, which is a version of .NET that was built for cross-platform support before Microsoft decided to officially go cross-platform with .NET Core. Like Xamarin, the Unity platform also runs on top of Mono.
Source: stackoverflow.com
Answer: CoreCLR is the .NET execution engine in .NET Core, performing functions such as garbage collection and compilation to machine code.
Consider:
Source: blogs.msdn.microsoft.com
Answer:
Thread represents an actual OS-level thread, with its own stack and kernel resources. Thread allows the highest degree of control; you can Abort() or Suspend() or Resume() a thread, you can observe its state, and you can set thread-level properties like the stack size, apartment state, or culture. ThreadPool is a wrapper around a pool of threads maintained by the CLR.
The Task class from the Task Parallel Library offers the best of both worlds. Like the ThreadPool, a task does not create its own OS thread. Instead, tasks are executed by a TaskScheduler; the default scheduler simply runs on the ThreadPool. Unlike the ThreadPool, Task also allows you to find out when it finishes, and (via the generic Task) to return a result.
Source: stackoverflow.com
Answer: Before a computer can execute the source code, special programs called compilers must rewrite it into machine instructions, also known as object code. This process (commonly referred to simply as “compilation”) can be done explicitly or implicitly.
Implicit compilation is a two-step process:
Source: telerik.com
Answer: Ahead-of-time (AOT) compilation delivers faster start-up time, especially in large applications where much code executes on startup. However, it requires more disk space and more memory/virtual address space to keep both the IL and the precompiled images, and loading those precompiled images involves a lot of expensive disk I/O.
Source: telerik.com
Answer:
Use a .NET Standard library when you want to increase the number of apps that will be compatible with your library, and you are okay with a decrease in the .NET API surface area your library can access.
Use a .NET Core library when you want to increase the .NET API surface area your library can access, and you are okay with allowing only .NET Core apps to be compatible with your library.
Source: stackoverflow.com
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Yes. This might surprise many, but ASP.NET Core works with the .NET Framework, and this is officially supported by Microsoft.
ASP.NET Core works with:
Source: talkingdotnet.com
Answer: Read Full Answer on 👉 FullStack.Cafe
Originally published on 👉 20 Basic TypeScript Interview Questions (2018 Edition) | FullStack.Cafe
Details:
Answer: TypeScript is a superset of JavaScript which primarily provides optional static typing, classes and interfaces. One of the big benefits is to enable IDEs to provide a richer environment for spotting common errors as you type the code. For a large JavaScript project, adopting TypeScript might result in more robust software, while still being deployable where a regular JavaScript application would run.
In details:
With the --strictNullChecks compiler flag enabled, the TypeScript compiler will not allow undefined to be assigned to a variable unless you explicitly declare it to be of a nullable type.
Source: stackoverflow.com
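A small illustration of strict null checking (the variable names are arbitrary):

```typescript
let a: number = 5;
// a = undefined;               // compile error when --strictNullChecks is on

let b: number | undefined = 5;  // explicitly nullable type
b = undefined;                  // allowed
console.log(a, b);
```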
Answer: Generics make it possible to create a component or function that works over a variety of types rather than a single one.
/** A class definition with a generic parameter */
class Queue<T> {
private data = [];
push = (item: T) => this.data.push(item);
pop = (): T => this.data.shift();
}
const queue = new Queue<number>();
queue.push(0);
queue.push("1"); // ERROR : cannot push a string. Only numbers allowed
Source: basarat.gitbooks.io
Answer: The answer is YES. There are four main principles of Object-Oriented Programming:
TypeScript can implement all four of them with its smaller and cleaner syntax.
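As a rough illustration (the class names are made up), the four classic principles can all be expressed directly in TypeScript: encapsulation via `private`, abstraction via `abstract`, plus inheritance and polymorphism:

```typescript
abstract class Animal {
  constructor(private name: string) {}           // encapsulation: name is hidden
  abstract speak(): string;                      // abstraction: no implementation here
  describe(): string {
    return `${this.name} says ${this.speak()}`;  // polymorphism: dispatches to the subclass
  }
}

class Dog extends Animal {                       // inheritance
  speak(): string { return "woof"; }
}

class Cat extends Animal {
  speak(): string { return "meow"; }
}

const pets: Animal[] = [new Dog("Rex"), new Cat("Tom")];
pets.forEach((p) => console.log(p.describe()));
```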
Source: jonathanmh.com
Answer: Just use:
if (value) {
}
It will evaluate to true
if value
is not:
null
undefined
NaN
''
0
false
TypeScript follows JavaScript's truthiness rules here.
Source: stackoverflow.com
Answer:
In TypeScript, the const
keyword cannot be used to declare class properties. Doing so causes a compiler error: "A class member cannot have the 'const' keyword." TypeScript 2.0 introduced the readonly
modifier:
class MyClass {
readonly myReadonlyProperty = 1;
myMethod() {
console.log(this.myReadonlyProperty);
}
}
new MyClass().myReadonlyProperty = 5; // error, readonly
Source: stackoverflow.com
Answer:
.map
files are source map files that let tools map between the emitted JavaScript code and the TypeScript source files that created it. Many debuggers (e.g. Visual Studio or Chrome's dev tools) can consume these files so you can debug the TypeScript file instead of the JavaScript file.
Source: stackoverflow.com
Answer: TypeScript supports getters/setters as a way of intercepting accesses to a member of an object. This gives you a way of having finer-grained control over how a member is accessed on each object.
class foo {
private _bar:boolean = false;
get bar():boolean {
return this._bar;
}
set bar(theBar:boolean) {
this._bar = theBar;
}
}
var myFoo = new foo();
var myBar = myFoo.bar; // correct (get)
myFoo.bar = true; // correct (set)
Source: typescriptlang.org
Answer: TypeScript doesn't only work for browser or frontend code; you can also use it to write your backend applications. For example, you could choose Node.js and gain some additional type safety along with the other abstractions the language brings.
npm i -g typescript
{
"compilerOptions": {
"target": "es5",
"module": "commonjs",
"declaration": true,
"outDir": "build"
}
}
tsc
node build/index.js
Source: jonathanmh.com
Answer: There are mainly three components of TypeScript:
Source: talkingdotnet.com
Details: Consider:
class Point {
x: number;
y: number;
}
interface Point3d extends Point {
z: number;
}
let point3d: Point3d = {x: 1, y: 2, z: 3};
Answer: Yes, the code is valid. A class declaration creates two things: a type representing instances of the class and a constructor function. Because classes create types, you can use them in the same places you would be able to use interfaces.
Source: typescriptlang.org
Answer:
Decorators can be used to modify the behavior of a class or become even more powerful when integrated into a framework. For instance, if your framework has methods with restricted access requirements (just for admin), it would be easy to write an @admin
method decorator to deny access to non-administrative users, or an @owner
decorator to only allow the owner of an object the ability to modify it.
class CRUD {
get() { }
post() { }
@admin
delete() { }
@owner
put() { }
}
Source: www.sitepen.com
Details: Consider the code:
class Foo {
save(callback: Function) : void {
//Do the save
var result : number = 42; //We get a number from the save operation
//Can I at compile-time ensure the callback accepts a single parameter of type number somehow?
callback(result);
}
}
var foo = new Foo();
var callback = (result: string) : void => {
alert(result);
}
foo.save(callback);
Can you make the callback in save a type-safe function? Rewrite the code to demonstrate.
Answer:
In TypeScript you can declare your callback type like:
type NumberCallback = (n: number) => any;
class Foo {
// Equivalent
save(callback: NumberCallback): void {
console.log(1)
callback(42);
}
}
var numCallback: NumberCallback = (result: number) : void => {
console.log("numCallback: ", result.toString());
}
var foo = new Foo();
foo.save(numCallback)
Source: stackoverflow.com
Answer: Classes defined in a module are available within that module. Outside the module you can't access them.
module Vehicle {
class Car {
constructor (
public make: string,
public model: string) { }
}
var audiCar = new Car("Audi", "Q7");
}
// This won't work
var fordCar = new Vehicle.Car("Ford", "Figo");
As per the above code, the fordCar
variable will give us a compile-time error. To make classes accessible outside the module, use the export
keyword for classes.
module Vehicle {
export class Car {
constructor (
public make: string,
public model: string) { }
}
var audiCar = new Car("Audi", "Q7");
}
// This works now
var fordCar = new Vehicle.Car("Ford", "Figo");
Source: http://www.talkingdotnet.com
Answer: Yes, TypeScript does support function overloading, but the implementation differs a bit from OO languages. We create just one concrete function plus a number of declarations so that TypeScript doesn't report compile errors. When this code is compiled to JavaScript, only the concrete function is visible. Since a JavaScript function can be called with any number of arguments, it just works.
class Foo {
myMethod(a: string);
myMethod(a: number);
myMethod(a: number, b: string);
myMethod(a: any, b?: string) {
alert(a.toString());
}
}
Source: typescriptlang.org
Details:
/* WRONG */
interface Fetcher {
getObject(done: (data: any, elapsedTime?: number) => void): void;
}
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Details:
interface X {
a: number
b: string
}
type X = {
a: number
b: string
};
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Originally published on 👉 20 Reactive Programming Interview Questions To Polish Up In 2019 | FullStack.Cafe
Answer: Read Full Answer on 👉 FullStack.Cafe
Answer: The Reactive Manifesto is a document that defines the core principles of reactive programming. It was first released in 2013 by a group of developers led by Jonas Bonér. The Reactive Manifesto underpins the principles of reactive programming.
Source: reactivemanifesto.org
Answer: Reactive programming is programming with asynchronous data streams. Event buses or your typical click events are really an asynchronous event stream, on which you can observe and do some side effects. Reactive is that idea on steroids. You are able to create data streams of anything, not just from click and hover events. Streams are cheap and ubiquitous, anything can be a stream: variables, user inputs, properties, caches, data structures, etc. For example, imagine your Twitter feed would be a data stream in the same fashion that click events are. You can listen to that stream and react accordingly.
Source: github.com
Answer: A stream is a sequence of ongoing events ordered in time. It can emit three different things: a value (of some type), an error, or a "completed" signal.
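A hand-rolled sketch of that contract (the `fromArray` helper is illustrative, not from any library): an observer receives zero or more values, then either an error or a completion signal.

```typescript
interface Observer<T> {
  next: (value: T) => void;      // a value of some type
  error: (err: Error) => void;   // an error ends the stream
  complete: () => void;          // the "completed" signal
}

function fromArray<T>(items: T[]) {
  return {
    subscribe(observer: Observer<T>): void {
      items.forEach((item) => observer.next(item));
      observer.complete();       // signal that the stream is done
    },
  };
}

const received: number[] = [];
fromArray([1, 2, 3]).subscribe({
  next: (v) => received.push(v),
  error: (e) => console.error(e),
  complete: () => console.log("completed:", received),
});
```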