Top 50 Network Administrator Interview Questions You Must Prepare (19 Mar 2024)

Boot to LAN is most often used when you are doing a fresh install on a system. What you would do is set up a network-based installer capable of network-booting via PXE. Boot to LAN enables this by allowing a pre-boot environment to look for a DHCP server and connect to the broadcasting network installation server. Environments that have very large numbers of systems more often than not have the capability of pushing out images via the network. This reduces the amount of hands-on time required on each system and keeps the installs more consistent.
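
As an illustration, a minimal PXE setup can be sketched with dnsmasq (my choice of tool here, not named above; the interface, address range, and boot file are placeholders):

```bash
# Hand out DHCP leases and serve a PXE boot image over TFTP
# (all values are placeholders for illustration)
sudo dnsmasq --interface=eth0 \
     --dhcp-range=192.168.1.100,192.168.1.200,12h \
     --enable-tftp --tftp-root=/srv/tftp \
     --dhcp-boot=pxelinux.0
```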

ARP, or the Address Resolution Protocol, can be likened to DNS for MAC addresses. Standard DNS allows for the mapping of human-friendly domain names to IP addresses, while ARP allows for the mapping of IP addresses to MAC addresses. In this way it lets systems go from a regular domain name down to the actual piece of hardware that name resides upon.
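
You can inspect the resulting IP-to-MAC mappings directly, for instance:

```bash
ip neigh show    # view the ARP (neighbor) cache on modern Linux
arp -a           # legacy syntax; also works on Windows and macOS
```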

RDP or Remote Desktop Protocol is the primary method by which Windows systems can be remotely accessed for troubleshooting, and it is a software-driven method. KVM or Keyboard, Video and Mouse, on the other hand, allows for fast switching between many different systems while using the same keyboard, monitor and mouse for all of them. KVM is usually a hardware-driven system, with a junction box placed between the user and the systems in question- but there are some options that are enhanced by software. KVM also doesn’t require an active network connection, so it can be very useful for using the same setup on multiple networks without having cross-talk.

Similar to how a DNS server caches the addresses of accessed websites, a proxy server caches the contents of those websites and handles the heavy lifting of access and retrieval for users. Proxy servers can also maintain lists of blacklisted and whitelisted websites so as to prevent users from picking up easily preventable infections. Depending on the intentions of the company, proxy servers can also be used to monitor web activity by users to make sure that sensitive information is not leaving the building. Proxy servers also exist as web proxy servers, allowing users to hide their true access point from the websites they are accessing and/or to get around region blocking.
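
For illustration, pointing a client at a proxy is usually a one-line affair; the proxy host and port below are placeholders:

```bash
# Fetch a page through an HTTP proxy instead of connecting directly
curl -x http://proxy.example.com:3128 https://example.com/
```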

However, there are two main differences between the Windows Home edition and Windows Professional: joining a domain and built-in encryption. Both features are present in Professional only, as joining a domain is nearly a mandatory requirement for businesses. EFS (the Encrypting File System) and its successor BitLocker are likewise only present in Pro. While there are workarounds for both of these items, they do present a nice quality-of-life boost as well as allow easier standardization across multiple systems.

That being said, the jump from Windows Pro to Windows Server is a monumental paradigm shift. While we could go through all of the bells and whistles of what makes Windows Server…Windows Server, it can be summed up very briefly as this: Windows Home and Pro are designed to connect outwards by default and are optimized as such. Windows Server is designed to have other objects connect to it, and it is heavily optimized for that purpose. Windows Server 2012 has taken this to a new extreme by supporting an installation style very similar to that of a Unix/Linux system, with no GUI whatsoever. As a result, Microsoft claims that the attack surface of the operating system is reduced massively when it is installed in that mode.

At a very basic level, there really isn’t one. As you progress up the chain however, you start to realize that there actually are a lot of differences in the power available to users (and admins) depending on how much you know about the different interfaces. Each of these utilities is a CLI- Command Line Interface- that allows for direct access to some of the most powerful utilities and settings in their respective operating systems. Command Prompt (cmd) is a Windows utility based very heavily on DOS commands, but has been updated over the years with different options such as long filename support.

Bash (short for Bourne-Again Shell) on the other hand is the primary means of managing Unix/Linux operating systems and has a great deal more power than many of its GUI counterparts. Any Windows user that is used to cmd will recognize some of the commands, due to the fact that DOS was heavily inspired by Unix and thus many commands have versions that exist in Bash. That being said, they may not be the best ones to use; for example, while list contents (dir) exists in Bash, the recommended method would be to use list (ls) as it allows for much easier-to-understand formatting. PowerShell, a newer Windows utility, can be considered a hybrid of these two systems- allowing for the legacy tools of the command prompt with some of the much more powerful scripting functions of Bash.
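
A quick illustration of the dir/ls point (GNU coreutils does ship a dir command, but ls is the idiomatic choice):

```bash
ls -lh    # long listing with human-readable sizes -- the recommended form
dir       # exists in GNU coreutils too, but with less flexible formatting
```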

While we’re on the subject of Apple, AppleTalk is a protocol developed by Apple to handle networking with little to no configuration (you may be sensing a pattern here). It reached its peak in the late 80s and early 90s, but there are still some devices that utilize this protocol. Most of its core technology has been moved over to Bonjour, while UPnP (Universal Plug and Play) has picked up on its ideology and moved the concept forward across many different hardware and software packages.

When trying to communicate with systems on the inside of a secured network, it can be very difficult to do so from the outside- and with good reason. Therefore, a port forwarding table within the router (or other connection management device) can allow specific traffic to be automatically forwarded on to a particular destination. For example, if you had a web server running on your network and you wanted access to be granted to it from the outside, you would set up port forwarding to port 80 on the server in question. This would mean that anyone putting your IP address into a web browser would be connected to the server’s website immediately. Please note, allowing direct outside access to a server inside your network is usually not recommended.
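
On a Linux box acting as the router, that web-server example might look like the following sketch (the interface name and internal address are placeholders):

```bash
# Forward inbound TCP 80 to an internal web server at 192.168.1.10
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
     -j DNAT --to-destination 192.168.1.10:80
sudo iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 80 -j ACCEPT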

Logon scripts are, surprisingly enough, scripts that run at logon time. They are most often used to maintain share and device mappings, as well as to force updates and configuration changes. In this way, they allow for one-step modifications if servers get changed, shares get renamed, or printers get switched out, for example.
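
A Unix-flavored sketch of the idea (server, share, and printer names are all hypothetical; on a Windows domain this would typically be a batch or Group Policy script instead):

```bash
#!/bin/bash
# Runs at login: re-map the file share and set the user's default printer
# (assumes mount privileges, e.g. an fstab 'user' entry; needs cifs-utils)
mount -t cifs //fileserver/projects /mnt/projects -o username="$USER"
lpoptions -d floor2_laser    # CUPS per-user default printer
```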

If you try to work out permissions for every single person in your organization individually, you can give yourself a migraine pretty quickly. Therefore, keeping permissions simple but strong is critical to administering a large network. Groups allow users to be pooled by their need to know and need to access particular information. In this way, the administrator sets the permissions once- for the group- then adds users to that group. When modifications to permissions need to be made, it’s one change that affects all members of that group.
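
In Unix terms, the pattern looks like this sketch (the group, directory, and user names are placeholders):

```bash
sudo groupadd accounting                  # create the group once
sudo chgrp -R accounting /srv/accounting  # hand the directory to the group
sudo chmod -R 770 /srv/accounting         # members get access, others do not
sudo usermod -aG accounting alice         # adding a user is now one command
```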

Dynamic Host Configuration Protocol is the default way for connecting up to a network. The implementation varies across Operating Systems, but the simple explanation is that there is a server on the network that hands out IP addresses when requested. Upon connecting to a network, a DHCP request will be sent out from a new member system. The DHCP server will respond and issue an address lease for a varying amount of time. If the system connects to another network, it will be issued a new address by that server but if it re-connects to the original network before the lease is up- it will be re-issued that same address that it had before.

To illustrate this point, say your phone connects to Wi-Fi at home. It will pick up a DHCP address from your router before you head to work and connect to your corporate network, where it will be issued a new address by your DHCP server. You’ll get another address at Starbucks for your mid-morning coffee, then at the local restaurant where you get lunch, then at the grocery store, and so on and so on.
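
On a Linux client you can drive this release/renew cycle by hand; eth0 is a placeholder interface name:

```bash
sudo dhclient -r eth0    # release the current lease
sudo dhclient -v eth0    # broadcast a new request and log the server's reply
```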

Giving a user as few privileges as possible tends to cause some aggravation for the user, but by the same token it also removes a lot of easily preventable infection vectors. Still, sometimes users need local admin rights in order to troubleshoot issues- especially if they’re on the road with a laptop. Therefore, creating a separate local admin account may sometimes be the most effective way to keep these privileges apart from day-to-day use.

ICMP is the Internet Control Message Protocol. Most users will recognize the name through tools such as ping and traceroute, as this is the protocol that these services run over, among other things. Its primary purpose is to tell systems attempting a remote connection whether the other end is available. Like TCP and UDP, it is part of the IP suite; it uses IP protocol number 1. Please note, this is not TCP port 1 or UDP port 1, as IP protocol numbers are a different numbering scheme (for reference, TCP is IP protocol 6, while UDP is IP protocol 17). A related source of confusion is the legacy ‘echo’ service, which listens on TCP and UDP port 7- that service is separate from the ICMP echo request/reply that ping actually uses.
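
The two tools named above exercise ICMP directly (sample addresses):

```bash
ping -c 4 192.168.1.1    # four ICMP echo requests
traceroute 8.8.8.8       # probes with increasing TTLs; each expiring hop
                         # answers with an ICMP "time exceeded" message
```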

The ability to remote into servers without having to actually be there is one of the most convenient methods of troubleshooting or running normal functions on a server. Terminal Services provide this capability for admins, but also another key function for standard users: the ability to run standard applications without having them installed on their local computers. In this way, all user profiles and applications can be maintained from a single location without having to worry about patch management and hardware failure on multiple systems.

An excellent guide to password strength can be found in Wikipedia’s password strength entry, which recommends the following (a quick shell sketch for the ‘generate randomly’ point follows the list):

  • “Use a minimum password length of 12 to 14 characters if permitted.
  • Include lowercase and uppercase alphabetic characters, numbers and symbols if permitted.
  • Generate passwords randomly where feasible.
  • Avoid using the same password twice (e.g. across multiple user accounts and/or software systems).
  • Avoid character repetition, keyboard patterns, dictionary words, letter or number sequences, usernames, relative or pet names, romantic links (current or past) and biographical information (e.g. ID numbers, ancestors’ names or dates).
  • Avoid using information that is or might become publicly associated with the user or the account.
  • Avoid using information that the user’s colleagues and/or acquaintances might know to be associated with the user.
  • Do not use passwords which consist wholly of any simple combination of the aforementioned weak components.”
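
For the ‘generate randomly’ point, here is one quick shell sketch (the character set and length are arbitrary choices):

```bash
# Read random bytes and keep only password-safe characters
tr -dc 'A-Za-z0-9!@#$%^&*' < /dev/urandom | head -c 14; echo
```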

A Firewall, put simply, keeps stuff from here talking to stuff over there. Firewalls exist in many different possible configurations, with both hardware and software options as well as network and host varieties. Most of the general user base had their first introduction to firewalls when Windows XP SP2 came along with Windows Firewall installed. This came with a lot of headaches, but to Microsoft’s credit it did a lot of good things. Over the years it has improved a great deal, and while there are still many options that go above and beyond what it does, what Windows Firewall does accomplish it does very well. Enhanced server-grade versions have been released as well, and have a great deal of customization available to the admin.

SSH or Secure Shell is most well known by Linux users, but has a great deal that it can be used for. SSH is designed to create a secure tunnel between devices, whether that be systems, switches, thermostats, toasters, etc. SSH also has a unique ability to tunnel other programs through it, similar in concept to a VPN so even insecure programs or programs running across unsecure connections can be used in a secure state if configured correctly. SSH runs over TCP port 22.
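
The tunneling trick mentioned above might look like this; the host names are placeholders:

```bash
# Carry local port 8080 through the encrypted SSH session to a web server
# reachable only from gateway.example.com
ssh -L 8080:intranet.local:80 user@gateway.example.com
# While connected, http://localhost:8080 reaches intranet.local over the tunnel
```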

HTTP, or HyperText Transfer Protocol, is the main protocol responsible for shiny content on the Web. Most webpages still use this protocol to transmit their basic website content, and it allows for the display and navigation of ‘hypertext’, or links. While HTTP can use a number of different carrier protocols to go from system to system, the primary protocol and port used is TCP port 80.
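
You can watch the port 80 conversation yourself:

```bash
curl -v http://example.com/    # -v prints the TCP port 80 connection plus the
                               # HTTP request and response headers
```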

The Encrypting File System, Microsoft’s built-in file encryption utility, has been around for quite some time. Files that have been encrypted in this way appear in Windows Explorer with a green tint, as opposed to the black of normal files or the blue of NTFS-compressed files. Files that have been encrypted are tied to the specific user, and it can be difficult to decrypt the file without the user’s assistance. On top of this, if the user loses their password it can become impossible to decrypt the files, as the decryption process is tied to the user’s login and password. EFS can only occur on NTFS-formatted partitions, and while it is capable of encrypting entire drives it is most often reserved for individual files and folders. For larger purposes, Bitlocker is a better alternative.

Tracert, or traceroute depending on the operating system, allows you to see exactly what routers you touch as you move along the chain of connections to your final destination. If you end up with a problem where you can’t connect to or can’t ping your final destination, a tracert can help in that regard, as you can tell exactly where the chain of connections stops. With this information, you can contact the correct people- whether the problem lies with your own firewall, your ISP, your destination’s ISP, or somewhere in between. Tracert, like ping, uses the ICMP protocol, but also has the ability to use the first step of the TCP three-way handshake and send out SYN requests for a response.
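
On Linux, the TCP SYN variant is a flag away (the Windows tracert tool sticks to ICMP):

```bash
sudo traceroute -T -p 80 example.com   # probe with TCP SYNs to port 80
# (on Windows the basic equivalent is: tracert example.com)
```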

At first glance it may be difficult to judge the difference between a hub and a switch, since both look roughly the same. They both have a large number of potential connections and are used for the same basic purpose- to create a network. However, the biggest difference is not on the outside, but on the inside, in the way that they handle connections.

In the case of a hub, it broadcasts all data to every port. This can make for serious security and reliability concerns, as well as cause a number of collisions to occur on the network. Old-style hubs and present-day wireless access points use this technique.

Switches, on the other hand, create connections dynamically, so that usually only the requesting port can receive the information destined for it. An exception to this rule is that if the switch has its maintenance port turned on for an NIDS implementation, it may copy all data going across the switch to that particular port in order to scan it for problems. The easiest way to make sense of it all is by thinking about it in the case of old-style phone connections.

A hub would be a ‘party line’ where everybody is talking all at the same time. It is possible to transmit on such a system, but it can be very hectic and potentially release information to people that you don’t want to have access to it. A switch, on the other hand, is like a phone operator- creating connections between ports on an as-needed basis.

When you’re working in Active Directory, you see a tree-type structure going down through various organizational units (OU’s). The easiest way to explain this is to run through a hypothetical example.

Say that we had a location reporting for CNN that dealt with nothing but the Detroit Lions. We would set up a location with a single domain, and computers for each of our users. This means starting at the bottom: OU’s containing the users, groups and computers are at the lowest level of this structure. A domain is a collection of these OU’s, as well as the policies and other rules governing them. So we could call this domain ‘CNNDetroitLions’. A single domain can cover a wide area and include multiple physical sites, but sometimes you need to go bigger.

A tree is a collection of domains bundled together by a common domain trunk, rules, and structure. If CNN decided to combine all of its football team sites together in a common group, so that its football sports reporters could go from one location to the next without a lot of problems, then that would be a tree. So then our domain could be joined up into a tree called ‘football’, and then the domain would be ‘CNNDetroitLions.football’ while another site could be called ‘CNNChicagoBears.football’.

Sometimes you need to go bigger still, and a collection of trees is bundled together into a Forest. Say CNN saw that this was working great and wanted to bring together all of its reporters under a single unit, so that any reporter could log in to any CNN-controlled site; we could call this Forest ‘cnn.com’. Our domain would then become ‘CNNDetroitLions.football.cnn.com’, while another member of this same Forest could be called ‘CNNNewYorkYankees.baseball.cnn.com’, and yet another could be ‘CNNLasVegas.poker.cnn.com’. Typically the larger an organization, the more complicated it becomes to administer, and when you get to something as large as this it becomes exponentially more difficult to police.

For the IP addresses that most people are familiar with (IPv4), there are 4 sets of numbers (octets), each with values of up to 255. You have likely run into this when troubleshooting a router or a DHCP server handing out addresses in a particular range- usually 192.x or 10.x in the case of a home or commercial network. IP classes are primarily differentiated by the number of potential hosts they can support on a single network: the more networks a given IP class supports, the fewer host addresses are available for each network. Class A networks run from 1.x.x.x up to 126.x.x.x (the entire 127.x.x.x range is reserved for loopback or localhost connections, most famously 127.0.0.1).

These networks are usually reserved for the very largest of customers, or some of the original members of the Internet, and xkcd has an excellent map (albeit a bit dated) showing who officially owns what. Class B (128.x to 191.x) and Class C (192.x to 223.x) networks are much more fuzzy at the top level about who officially owns them. Within Class C, the 192.168.x.x range is reserved for private, in-house networks, which as we mentioned above is why so many different manufacturers use 192.168.x as their default setting. Class D and E are reserved for special uses and normally are not required knowledge.


ipconfig is one of the primary network connection troubleshooting and information tools available for Windows operating systems. It allows the user to see the current connection information, force a release of settings assigned by DHCP, force a new request for a DHCP lease, and clear out the local DNS cache, among other functions. ifconfig is a similar utility for Unix/Linux systems that at first glance seems to be identical, but actually isn’t. While it does allow very quick (and thorough) access to network connection information, it does not provide the DHCP functions that ipconfig does. Those functions are in fact handled by a separate client service/daemon, such as dhclient or dhcpcd.
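
A rough mapping, for illustration (resolvectl assumes a systemd-resolved distribution; eth0 is a placeholder):

```bash
ip addr show               # connection info; modern successor to ifconfig
# ipconfig /release and /renew have no ifconfig equivalent; instead:
sudo dhclient -r eth0 && sudo dhclient eth0
resolvectl flush-caches    # the ipconfig /flushdns counterpart
```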

The three basic ways to authenticate someone are: something they know (password), something they have (token), and something they are (biometrics). Two-factor authentication is a combination of two of these methods, oftentimes using a password and token setup, although in some cases this can be a PIN and thumbprint.
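
As a sketch of the ‘something they have’ factor, a TOTP token code can be generated with oath-toolkit (assumed installed; the base32 secret is a placeholder):

```bash
oathtool --totp -b JBSWY3DPEHPK3PXP   # prints the current 6-digit code
```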

Even if you don’t recognize anything else on this list, you have likely heard of TCP/IP before. Contrary to popular belief, TCP/IP is not actually a single protocol, but rather TCP is a member of the IP protocol suite. TCP stands for Transmission Control Protocol and is one of the most mind-bogglingly massively used protocols in existence today.

Almost every major protocol that we use on a daily basis- HTTP, FTP and SSH among a large list of others- utilizes TCP. The big benefit of TCP is that it has to establish the connection on both ends before any data begins to flow. It is also able to sync up this data flow so that if packets arrive out of order, the receiving system is able to figure out what the puzzle of packets is supposed to look like- that this packet goes before this one, this one goes here, this one doesn’t belong at all and looks sort of like a fish, etc. Because the list of ports for TCP is so massive, charts are commonplace to show what uses what, and Wikipedia’s list of TCP and UDP port numbers is excellent as a desk reference.
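
Your own system carries a local copy of that well-known port list, and you can watch live TCP sessions too:

```bash
grep -w 80/tcp /etc/services   # the local well-known port database
ss -t state established        # TCP connections currently synced up
```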

A print server can refer to two different options- an actual server that shares out many different printers from a central administration point, or a small dedicated box that allows a legacy printer to connect to a network jack. A network attached printer on the other hand has a network card built into it, and thus has no need for the latter option. It can still benefit from the former however, as network attached printers are extremely useful in a corporate environment since they do not require the printer to be connected directly to a single user’s system.

SNMP is the “Simple Network Management Protocol”. Most systems and devices on a network are able to tell when they are having issues and present them to the user through either prompts or displays directly on the device. For administrators, unfortunately, it can be difficult to tell when there is a problem unless the user calls them over. On devices that have SNMP enabled, however, this information can be broadcast and picked up by programs that know what to look for. In this way, reports can be run based on the current status of the network- finding out what patches are currently not installed, whether a printer is jammed, and so on. In large networks this is a requirement, but in any size network it can serve as a resource to see how the network is faring and give a baseline of its current health.
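
A minimal query with the net-snmp tools might look like this (the device address and the default ‘public’ community string are placeholders):

```bash
snmpwalk -v2c -c public 192.168.1.1 system   # dump the device's system subtree
```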

The simple answer is that Multimode is cheaper but can’t transmit as far. Single Mode has a smaller core (the part that handles light) than Multimode, but is better at keeping the light intact. This allows it to travel greater distances and at higher bandwidths than Multimode. The problem is that the requirements for Single Mode are very specific, and as a result it usually is more expensive than Multimode. Therefore, for most applications you will usually see Multimode in the datacenter, with Single Mode for long-haul connections.

You may never have heard of this program, but if you have ever dealt with Apple devices you’ve seen its effects. Bonjour is one of the programs that comes bundled with nearly every piece of Apple software (most notably iTunes) and handles a lot of Apple’s automatic discovery techniques. Best described as a hybrid of IPX and DNS, Bonjour discovers broadcasting objects on the network by using mDNS (multicast DNS) with little to no configuration required. Many admins will deliberately disable this service in a corporate environment due to potential security issues; however, in a home environment it can be left up to the user to decide if the risk is worth the convenience.
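
On Linux the same mDNS traffic can be browsed with Avahi (assumed installed):

```bash
avahi-browse -at    # list every mDNS/Bonjour service announcing on the LAN
```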

FTP, or File Transfer Protocol, is one of the big legacy protocols that probably should be retired. FTP is primarily designed for large file transfers, with the capability of resuming downloads if they are interrupted. Access to an FTP server can be accomplished using two different techniques: Anonymous access and Standard Login. Both of these are basically the same, except Anonymous access does not require an active user login while a Standard Login does. Here’s where the big problem with FTP lies, however- the credentials of the user are transmitted in cleartext, which means that anybody listening on the wire could sniff the credentials extremely easily. Two competing implementations that take care of this issue are SFTP (file transfer over SSH) and FTPS (FTP with SSL). FTP uses TCP ports 20 and 21.
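
The contrast in practice (host names are placeholders):

```bash
ftp ftp.example.com           # classic FTP: credentials cross in cleartext
sftp user@files.example.com   # file transfer over SSH instead (TCP 22)
```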

An IDS is an Intrusion Detection System, with two basic variations: Host Intrusion Detection Systems and Network Intrusion Detection Systems. An HIDS runs as a background utility, much like an anti-virus program, while a Network Intrusion Detection System sniffs packets as they go across the network, looking for things that aren’t quite ordinary. Both systems have two basic variants- signature-based and anomaly-based. Signature-based is very much like an anti-virus system, looking for known values of known ‘bad things’, while anomaly-based looks for network traffic that doesn’t fit the usual pattern of the network. The latter requires a bit more time to get a good baseline, but in the long term can be better on the uptake for custom attacks.

HTTPS or Secure HTTP (not to be confused with SHTTP, which is an unrelated protocol) is HTTP’s big brother. Designed for identity verification, HTTPS uses SSL certificates to verify that the server you are connecting to is the one that it says it is. While HTTPS also provides encryption, that alone is often deemed not enough, and further encryption methods are desired whenever possible. HTTPS traffic goes over TCP port 443.

Also known as the program that can give your admin nightmares, telnet is a very small and versatile utility that allows for connections on nearly any port. Telnet allows the admin to connect into remote devices and administer them via a command prompt. In many cases this has been replaced by SSH, as telnet transmits its data in cleartext (like FTP). Telnet can and does still get used, however, in cases where the user is trying to see if a program is listening on a particular port but wants to keep a low profile, or where the connection type pre-dates standard network connectivity methods.
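
The port-probing trick looks like this; the mail server name is a placeholder:

```bash
telnet mail.example.com 25    # did anything answer on TCP 25?
nc -zv mail.example.com 25    # a common modern stand-in (netcat port check)
```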

/etc/passwd is the primary file in Unix/Linux operating systems that stores information about user accounts, and it can be read by all users. /etc/shadow is used by the operating system instead for storing the actual password hashes, due to security concerns and its increased hashing capabilities. /etc/shadow more often than not is highly restricted to privileged users.
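
A quick look at the split in practice (the username is a placeholder):

```bash
getent passwd alice    # world-readable account record, no password hash
ls -l /etc/shadow      # typically -rw-r----- root shadow: hashes locked down
```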

External Media has been used for backups for a very long time, but has started to fall out of favor in the past few years due to its speed limitations. As capacities continue to climb higher and higher, the amount of time it takes to not only perform a backup but also a restore skyrockets. Tapes have been particularly hit hard in this regard, primarily because they were quite sluggish even before the jump to the terabyte era. Removable hard disks have been able to pick up on this trend however, as capacity and price have given them a solid lead in front of other options. But this takes us back to the question- why use EXTERNAL media? Internal media usually is able to connect faster, and is more reliable correct? Yes and no. While the estimated lifetime of storage devices has been steadily going up, there is always the chance for user error, data corruption, or hiccups on the hard disk. As a result, having regular backups to external media is still one of the best bang-for-buck methods available. Removable hard disks now have the capability to connect very rapidly, even without the use of a dedicated hot-swap drive bay. Through eSATA or USB3, these connections are nearly as fast as if they were plugged directly into the motherboard.
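
One common shape for such a backup, as a sketch (the paths are placeholders):

```bash
# Mirror /data to an external disk mounted at /mnt/backup
rsync -a --delete /data/ /mnt/backup/data/
```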

Sticky ports are one of the network admin’s best friends and worst headaches. They allow you to set up your network so that each port on a switch only permits one (or a number that you specify) computer to connect on that port by locking it to a particular MAC address. If any other computer plugs into that port, the port shuts down and you receive a call that they can’t connect anymore. If you were the one that originally ran all the network connections then this isn’t a big issue, and likewise if it is a predictable pattern then it also isn’t an issue. However if you’re working in a hand-me-down network where chaos is the norm then you might end up spending a while toning out exactly what they are connecting to.

If you did any multiplayer PC gaming in the 90s and early 2000s, you likely knew of the IPX protocol as ‘the one that actually works’. IPX, or Internetwork Packet Exchange, was an extremely lightweight protocol, which, given the limits of computers of the age, was a very good thing. A competitor to TCP/IP, it functioned very well in small networks, didn’t require elements like DHCP, and needed little to no configuration, but it does not scale well for applications like the Internet. As a result it fell by the wayside, and it is now not a required protocol for most elements.

A workgroup is a collection of systems each with their own rules and local user logins tied to that particular system. A Domain is a collection of systems with a centralized authentication server that tells them what the rules are. While workgroups work effectively in small numbers, once you pass a relatively low threshold (usually anything more than say 5 systems), it becomes increasingly difficult to manage permissions and sharing effectively. To put this another way, a workgroup is very similar to a P2P network- each member is its own island and chooses what it decides to share with the rest of the network. Domains on the other hand are much more like a standard client/server relationship- the individual members of the domain connect to a central server which handles the heavy lifting and standardization of sharing and access permissions.

A subnet mask tells the network how big it is. When an address is inside the mask, it will be handled internally as a part of the local network. When it is outside, it will be handled differently as it is not part of the local network. The proper use and calculation of a subnet mask can be a great benefit when designing a network as well as for gauging future growth.
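
The calculation itself is a bitwise AND of address and mask, as this small sketch shows (sample values):

```bash
#!/bin/bash
# Derive the network address from an IP and subnet mask
ip="192.168.5.130"; mask="255.255.255.0"
IFS=. read -r i1 i2 i3 i4 <<< "$ip"
IFS=. read -r m1 m2 m3 m4 <<< "$mask"
echo "network: $((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
# prints "network: 192.168.5.0" -- any 192.168.5.x address is local here
```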

Virtual Machines have only recently come into mainstream use, however they have been around under many different names for a long time. With the massive growth of hardware outstripping software requirements, it is now possible to have a server lying dormant 90% of the time while having other older systems at max capacity. Virtualizing those systems would allow the older operating systems to be copied completely and running alongside the server operating system- allowing the use of the newer more reliable hardware without losing any information on the legacy systems. On top of this, it allows for much easier backup solutions as everything is on a single server.

127.0.0.1 is the loopback connection on your network interface card (NIC), and pinging this address will tell you whether it is responding. If the ping is successful, then the hardware is good. If it isn’t, then you might have some maintenance in your future. 127.0.0.1 and localhost mean the same thing as far as most functions are concerned; however, be careful when using them in situations like web programming, as browsers can treat them very differently.

The DHCP server can be set up on a Windows or Linux platform. Multiple scopes can be set up on the DHCP server, corresponding to the IP address ranges of the different networks. An IP helper address needs to be configured on the router for communication between DHCP clients residing on different networks and the DHCP server.

“A domain local group is a security or distribution group that can contain universal groups, global groups, other domain local groups from its own domain, and accounts from any domain in the forest. You can give domain local security groups rights and permissions on resources that reside only in the same domain where the domain local group is located.

A global group is a group that can be used in its own domain, in member servers and in workstations of the domain, and in trusting domains. In all those locations, you can give a global group rights and permissions and the global group can become a member of local groups. However, a global group can contain user accounts that are only from its own domain.

A universal group is a security or distribution group that contains users, groups, and computers from any domain in its forest as members. You can give universal security groups rights and permissions on resources in any domain in the forest. Universal groups are not supported in mixed-mode domains.”

The twin to TCP is UDP- User Datagram Protocol. Where TCP has a lot of additional under-the-hood features to make sure that everybody stays on the same page, UDP can broadcast ‘into the dark’- not really caring if somebody on the other end is listening (and thus is often called a ‘connectionless’ protocol). As a result, the extra heavy lifting that TCP needs to do in order to create and maintain its connection isn’t required, so UDP oftentimes has a faster transmission speed than TCP.

An easy way to picture the differences between these two protocols is like this: TCP is like a CB radio- the person transmitting is always waiting for confirmation from the person on the other end that they received the message. UDP on the other hand is like a standard television broadcast signal. The transmitter doesn’t know or care about the person on the other end; all it does care about is that its signal is going out correctly. UDP is used primarily for ‘small’ bursts of information such as DNS requests, where speed matters above nearly everything else. The above listing for TCP also contains counterparts for UDP, so it can be used as a reference for both.
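
You can see the ‘into the dark’ behavior with netcat (OpenBSD nc syntax assumed; the port number is arbitrary):

```bash
nc -u -l 9999                         # terminal 1: listen for UDP on port 9999
echo "hello" | nc -u 127.0.0.1 9999   # terminal 2: fire a datagram -- no
                                      # handshake, no delivery guarantee
```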

Essentially root is THE admin, but in a Linux environment it is important to remember that unlike in a Windows environment, you spend very little time in a “privileged” mode. Many Windows programs over the years have required that the user be a local admin in order to function properly and have caused huge security issues as a result. This has changed some over the years, but it can still be difficult to remove all of the programs asking for top level permissions. A Linux user remains as a standard user nearly all the time, and only when necessary do they change their permissions to that of root or the superuser (su). sudo (literally- superuser do …) is the main way used to run one-off commands as root, or it is also possible to temporarily have a root-level bash prompt. UAC (User Account Control) is similar in theme to sudo, and like Windows Firewall can be a pain in the neck but it does do a lot of good. Both programs allow the user to engage higher-level permissions without having to log out of their current user session- a massive time saver.
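
The two patterns described above, side by side (the service name is a placeholder):

```bash
sudo systemctl restart sshd   # run a single one-off command as root
sudo -i                       # temporary root-level shell; exit when finished
```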

Shadow copies are a versioning system in place on Windows operating systems. This allows for users to go back to a previously available version of a file without the need for restoring the file from a standard backup- although the specific features of shadow copies vary from version to version of the OS. While it is not necessary to use a backup function in conjunction with Shadow Copies, it is recommended due to the additional stability and reliability it provides. Please note- Shadow Copies are not Delta Files. Delta files allow for easy comparison between versions of files, while Shadow Copies store entire previous versions of the files.

Error 5 is very common when dealing with files and directories that have very specific permissions. When trying to copy elements from areas that have restricted permissions, or when trying to copy files to an area that has restricted permissions, you may get this error, which basically means “Access denied”. Checking permissions, making sure that you have the appropriate permissions to both the source and destination locations, and making yourself the owner of those files can help to resolve this issue. Just remember, if you were not meant to be able to view these files, to return the permissions to normal once you are finished.

DNS is the Internet’s phone book. The Domain Name System is what makes it possible to only have to remember something like “cnn.com” instead of (at this particular moment) “157.166.226.26”. IP addresses change all the time, however, although less so for mega-level servers. Human-friendly names give users something much easier to remember and less likely to change frequently, and DNS makes it possible to map those names to new addresses under the hood. If you were to look in a standard phone book and you knew the name of the person or business you were looking for, it would show you the number for that person. DNS servers do exactly the same thing, but with updates on a daily or hourly basis.

The tiered nature of DNS also makes it possible to have repeat queries responded to very quickly, although it may take a few moments to discover where a brand new address is that you haven’t been to before. From your home, say that you wanted to go to the InfoSec Institute’s home page. You know the address for it, so you punch it in and wait. Your computer will first talk to your local DNS server (likely your home router) to see if it knows where it is. If it doesn’t know, it will talk to your ISP’s DNS server and ask it if it knows. If the ISP doesn’t know, it will keep going up the chain asking questions until it reaches one of the 13 Root DNS Servers. The responding DNS server will send the appropriate address back down the pipe, caching it in each location as it does so to make any repeat requests much faster.
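
dig can replay that whole chain of questions for you:

```bash
dig +trace www.infosecinstitute.com   # walks from the root servers down,
                                      # printing each referral along the way
```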

Services are programs that run in the background based on a particular system status, such as startup. Services exist across nearly all modern operating systems, although they vary in their naming conventions depending on the OS- for example, services are referred to as daemons in Unix/Linux-type operating systems. Services also have the ability to set up actions to be taken if the program stops or is closed down. In this way, they can be configured to remain running at all times.
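
On a systemd-based Linux box, for example (service names vary by distribution):

```bash
systemctl status sshd              # is the daemon running?
sudo systemctl enable --now sshd   # start it now and at every boot
# Restart=on-failure in the unit file keeps it running if it crashes
```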