
Explain briefly how domain names are translated into IP addresses.

DNS (the Domain Name System) is what translates a familiar domain name (www.google.com) into an IP address the browser can use (173.194.33.174). This system is fundamental to the availability and performance of a webpage, yet most people don't fully understand how it works. The process described below walks through how a name is resolved, starting with the basics.

Before the page and any resource on it are loaded, the DNS must be resolved so the browser can establish a TCP connection to make the HTTP request. In addition, for every external resource referenced by a URL, the DNS resolution must complete the same steps (per unique domain) before the request is made over HTTP. The DNS resolution process starts when the user types a URL into the browser and hits Enter. At this point, the browser asks the operating system for the address of a specific host, in this case www.google.com.





Step 1: OS Recursive Query to DNS Resolver
Since the operating system doesn’t know where “www.google.com” is, it queries a DNS resolver. The query the OS sends to the DNS resolver has a special flag that tells the resolver it is a “recursive query.” This means that the resolver must complete the recursion, and the response must be either an IP address or an error.
For most users, their DNS resolver is provided by their Internet Service Provider (ISP), or they use a public alternative such as Google Public DNS (8.8.8.8) or OpenDNS (208.67.222.222). This can be viewed or changed in your network or router settings. At this point, the resolver goes through a process called recursion to convert the domain name into an IP address.
Step 2: DNS Resolver Iterative Query to the Root Server
The resolver starts by querying one of the root DNS servers for the IP of “www.google.com.” This query does not have the recursive flag and therefore is an “iterative query,” meaning its response must be an address, the location of an authoritative name server, or an error. The root is represented in the hidden trailing “.” at the end of the domain name. Typing this extra “.” is not necessary as your browser automatically adds it.
There are 13 root server clusters, named A through M, with servers in over 380 locations. They are managed by 12 different organizations that report to the Internet Assigned Numbers Authority (IANA), such as Verisign, which controls the A and J clusters. All of the servers serve copies of the same root zone file.
Step 3: Root Server Response
These root servers hold the locations of all of the top level domains (TLDs) such as .com, .de, .io, and newer generic TLDs such as .camera.
The root doesn’t have the IP info for “www.google.com,” but it knows that .com might know, so it returns the location of the .com servers. The root responds with a list of the 13 locations of the .com gTLD servers, listed as NS or “name server” records.
Step 4:  DNS Resolver Iterative Query to the TLD Server
Next, the resolver queries one of the .com name servers for the location of google.com. Like the root servers, each of the TLDs has 4-13 clustered name servers existing in many locations. There are two types of TLDs: country codes (ccTLDs), run by government organizations, and generic TLDs (gTLDs). Every gTLD has a different commercial entity responsible for running its servers. In this case, we will be using the gTLD servers controlled by Verisign, which runs .com, .net, .edu, and .gov, among other TLDs.
Step 5: TLD Server Response
Each TLD server holds a list of all of the authoritative name servers for each domain in the TLD. For example, each of the 13 .com gTLD servers has a list of all of the name servers for every single .com domain. The .com gTLD server does not have the IP addresses for google.com, but it knows the location of google.com’s name servers. The .com gTLD server responds with a list of all of google.com’s NS records. In this case, Google has four name servers, “ns1.google.com” to “ns4.google.com.”
Step 6: DNS Resolver Iterative Query to the Google.com NS
Finally, the DNS resolver queries one of Google’s name servers for the IP of “www.google.com.”
Step 7: Google.com NS Response
This time the queried Name Server knows the IPs and responds with an A or AAAA address record (depending on the query type) for IPv4 and IPv6, respectively.
Step 8: DNS Resolver Response to OS
At this point, the resolver has finished the recursion process and is able to respond to the end user’s operating system with an IP address.
Step 9: Browser Starts TCP Handshake
At this point, the operating system, now in possession of www.google.com’s IP address, provides the IP to the application (browser), which initiates the TCP connection to start loading the page.
As mentioned earlier, this is the worst-case scenario in terms of the time required to complete the resolution. In most cases, if the user has recently accessed URLs of the same domain, or other users relying on the same DNS resolver have made similar requests, no DNS resolution will be required at all, or it will be limited to the query to the local DNS resolver.
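To see what this whole process produces from an application's point of view, here is a minimal sketch using the Python standard library (the hostname is just the article's example). It asks the operating system's stub resolver to perform the lookup; the recursion itself is still carried out by the configured DNS resolver, exactly as described in the steps above.

```python
# Minimal sketch (Python 3 standard library): resolve a hostname the same way
# a browser does before opening its TCP connection. The recursion is done by
# the OS / configured resolver, not by this program.
import socket

hostname = "www.google.com"   # example domain used throughout the article

# getaddrinfo consults the local cache / configured DNS resolver and returns
# one entry per resolved address: A records map to IPv4, AAAA to IPv6.
for family, _type, _proto, _canonname, sockaddr in socket.getaddrinfo(
        hostname, 80, proto=socket.IPPROTO_TCP):
    ip = sockaddr[0]
    record = "AAAA (IPv6)" if family == socket.AF_INET6 else "A (IPv4)"
    print(f"{hostname} -> {ip}   [{record} record]")
```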



What are the characteristics of modern web applications (WebApps)?

The following attributes are encountered in the vast majority of WebApps.

Network intensiveness. A WebApp resides on a network and must serve the needs of a diverse community of clients. The network may enable worldwide access and communication (i.e., the Internet) or more limited access and communication (e.g., a corporate Intranet).

Concurrency. A large number of users may access the WebApp at one time. In many cases, the patterns of usage among end users will vary greatly.

Unpredictable load. The number of users of the WebApp may vary by orders of magnitude from day to day. One hundred users may show up on Monday; 10,000 may use the system on Thursday.

Performance.  If a web app user must wait too long (for access, for server-side processing, for client-side formatting and display), he or she may decide to go elsewhere.

Availability. Although expectation of 100 percent availability is unreasonable, users of popular WebApps often demand access on a 24/7/365 basis. Users in Australia or Asia might demand access during times when traditional domestic software applications in North America might be taken off-line for maintenance.

Data-driven. The primary function of many WebApps is to use hypermedia to present text, graphics, audio, and video content to the end user. In addition, WebApps are commonly used to access information that exists on databases that are not an integral part of the Web-based environment (e.g., e-commerce or financial applications).

Content sensitive. The quality and aesthetic nature of content remain an important determinant of the quality of a WebApp.

Continuous evolution. Unlike conventional application software that evolves over a series of planned, chronologically spaced releases, Web applications evolve continuously. It is not unusual for some WebApps (specifically, their content) to be updated on a minute-by-minute schedule or for content to be independently computed for each request.

Immediacy. Although immediacy—the compelling need to get the software to market quickly—is a characteristic of many application domains, WebApps often exhibit a time-to-market that can be a matter of a few days or weeks.

Security. Because WebApps are available via network access, it is difficult, if not impossible, to limit the population of end users who may access the application. In order to protect sensitive content and provide secure modes of data transmission, strong security measures must be implemented throughout the infrastructure that supports a WebApp and within the application itself.

Important Full Forms of Computer Terminology

**********************************************************
1. GOOGLE: Global Organization Of Oriented Group Language Of Earth
2. YAHOO: Yet Another Hierarchical Officious Oracle
3. WINDOW: Wide Interactive Network Development for Office work Solution
4. COMPUTER: Common Oriented Machine Particularly United and used under Technical and Educational Research
5. VIRUS: Vital Information Resources Under Siege
6. UMTS: Universal Mobile Telecommunications System
7. AMOLED: Active-Matrix Organic Light-Emitting Diode
8. OLED: Organic Light-Emitting Diode
9. IMEI: International Mobile Equipment Identity
10. ESN: Electronic Serial Number
11. UPS: Uninterruptible Power Supply
12. HDMI: High-Definition Multimedia Interface
13. VPN: Virtual Private Network
14. APN: Access Point Name
15. SIM: Subscriber Identity Module
16. LED: Light-Emitting Diode
17. DLNA: Digital Living Network Alliance
18. RAM: Random Access Memory
19. ROM: Read-Only Memory
20. VGA: Video Graphics Array
21. QVGA: Quarter Video Graphics Array
22. WVGA: Wide Video Graphics Array
23. WXGA: Widescreen Extended Graphics Array
24. USB: Universal Serial Bus
25. WLAN: Wireless Local Area Network
26. PPI: Pixels Per Inch
27. LCD: Liquid Crystal Display
28. HSDPA: High-Speed Downlink Packet Access
29. HSUPA: High-Speed Uplink Packet Access
30. HSPA: High-Speed Packet Access
31. GPRS: General Packet Radio Service
32. EDGE: Enhanced Data Rates for GSM Evolution
33. NFC: Near Field Communication
34. OTG: On-The-Go
35. S-LCD: Super Liquid Crystal Display
36. OS: Operating System
37. SNS: Social Networking Service
38. HS: Hotspot
39. POI: Point Of Interest
40. GPS: Global Positioning System
41. DVD: Digital Video Disc / Digital Versatile Disc
42. DTP: Desktop Publishing
43. DNSE: Digital Natural Sound Engine
44. OVI: Ohio Video Intranet
45. CDMA: Code Division Multiple Access
46. WCDMA: Wideband Code Division Multiple Access
47. GSM: Global System for Mobile Communications
48. WI-FI: Wireless Fidelity
49. DIVX: Digital Internet Video Access
50. APK: Android Package Kit
51. J2ME: Java 2 Micro Edition
53. DELL: Digital Electronic Link Library
54. ACER: Acquisition Collaboration Experimentation Reflection
55. RSS: Really Simple Syndication
56. TFT: Thin Film Transistor
57. AMR: Adaptive Multi-Rate
58. MPEG: Moving Picture Experts Group
59. IVRS: Interactive Voice Response System
60. HP: Hewlett-Packard

Define the term software.


In general, software can be defined as a collection of computer programs, where a program in turn is a collection of commands. It is a list of instructions designed to perform certain processing on the input and to produce certain results.

It is written to handle an input-process-output system to achieve predetermined goals.

What are different types of DBMS?

Different Types of database Management System

The DBMS can be classified according to the number of users and the database site locations. These are:

On the basis of the number of users:
  • Single-user DBMS
  • Multi-user DBMS
 
On the basis of the site location
 
  • Centralized DBMS
  • Parallel DBMS
  • Distributed DBMS
  • Client/server DBMS
The DBMS can be multi-user or single-user. The configuration of the hardware and the size of the organization determine whether it is a multi-user or a single-user system.

In a single-user system the database resides on one computer and is accessed by only one user at a time. This one user may design, maintain, and write database programs.

Because of the large amount of data to be managed, most systems are multi-user. In this situation the data are both integrated and shared. A database is integrated when the same information is not recorded in two places. For example, both the Library department and the Accounts department of a college database may need student addresses. Even though both departments may access different portions of the database, the students' addresses should reside in only one place. It is the job of the DBA to make sure that the DBMS makes the correct addresses available from one central location.

Centralized Database System

The centralized DBMS system consists of a single processor together with its associated data storage devices and other peripherals. It is physically confined to a single location. Data can be accessed from multiple sites through a computer network, while the database itself is maintained at the central site.
                   Figure: Centralized database system (a single processor with its associated storage devices)

Disadvantages of Centralized Database System

  • When the central site computer or database system goes down, all users are blocked from using the system until it comes back up.
  • Communication costs from the terminals to the central site can be expensive.

Parallel Database System

A parallel database system architecture consists of multiple Central Processing Units (CPUs) and data storage disks operating in parallel. Hence, it improves processing and Input/Output (I/O) speeds. Parallel database systems are used in applications that have to query extremely large databases or that have to process an extremely large number of transactions per second.

Advantages of a Parallel Database System

  • Parallel database systems are very useful for applications that have to query extremely large databases (of the order of terabytes, for example, 10^12 bytes) or that have to process an extremely large number of transactions per second (of the order of thousands of transactions per second).
  • In a parallel database system, the throughput (that is, the number of tasks that can be completed in a given time interval) and the response time (that is, the amount of time it takes to complete a single task from the time it is submitted) are very high.

Disadvantages of a Parallel Database System

  • In a parallel database system, there is a startup cost associated with initiating a single process, and the startup time may overshadow the actual processing time, affecting speedup adversely.
  • Since processes executing in a parallel system often access shared resources, a slowdown may result from the interference of each new process as it competes with existing processes for commonly held resources, such as shared data storage disks, the system bus, and so on.

Distributed Database System

A logically interrelated collection of shared data physically distributed over a computer network is called a distributed database, and the software system that permits the management of the distributed database and makes the distribution transparent to users is called a distributed DBMS.

It consists of a single logical database that is split into a number of fragments. Each fragment is stored on one or more computers under the control of a separate DBMS, with the computers connected by a communications network. As shown in the figure, in a distributed database system, data is spread across a variety of different databases. These are managed by a variety of different DBMS software running on a variety of different operating systems. These machines are spread (or distributed) geographically and connected together by a variety of communication networks.
                         Figure: Distributed database system (shared data physically distributed over a computer network)

Advantages of Distributed Database System

 
• Distributed database architecture provides greater efficiency and better performance.
• A single database (on server) can be shared across several distinct client (application) systems.
• As data volumes and transaction rates increase, users can grow the system incrementally.
• It causes less impact on ongoing operations when adding new locations.
• Distributed database system provides local autonomy.
 

Disadvantages of Distributed Database System

 
• Recovery from failure is more complex in distributed database systems than in centralized systems.

Client-Server DBMS

The client/server architecture of a database system has two logical components, namely the client and the server. Clients are generally personal computers or workstations, whereas the server is a large workstation, minicomputer, or mainframe computer system. The applications and tools of the DBMS run on one or more client platforms, while the DBMS software resides on the server. The server computer is called the back end and the client's computer is called the front end. These server and client computers are connected by a network. The applications and tools act as clients of the DBMS, making requests for its services. The DBMS, in turn, processes these requests and returns the results to the client(s). The client handles the Graphical User Interface (GUI) and does computations and other programming of interest to the end user. The server handles the parts of the job that are common to many clients, for example, database access and updates.
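The division of labour described above can be sketched in a few lines of Python. This is purely illustrative: the port number, the table, and the "protocol" of sending raw SQL text over a socket are invented for this example and are not how a real DBMS client library works.

```python
# Toy sketch of the client/server split (illustrative only, not a real DBMS
# wire protocol): the back end owns the data, the front end only sends
# requests and displays results.
import json
import socket
import sqlite3
import threading

HOST, PORT = "127.0.0.1", 5433          # example address and port

def backend(srv: socket.socket) -> None:
    """Server side: owns the database and processes one client request."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE student (id INTEGER, name TEXT)")
    db.executemany("INSERT INTO student VALUES (?, ?)",
                   [(1, "Asha"), (2, "Bikram")])
    conn, _addr = srv.accept()
    with conn:
        query = conn.recv(4096).decode()            # request from the client
        rows = db.execute(query).fetchall()         # data access happens here
        conn.sendall(json.dumps(rows).encode())     # only results travel back

srv = socket.create_server((HOST, PORT))            # back end listens first
threading.Thread(target=backend, args=(srv,), daemon=True).start()

# Front end (client): keeps no data, just formats the request and the output.
with socket.create_connection((HOST, PORT)) as c:
    c.sendall(b"SELECT id, name FROM student")
    print(json.loads(c.recv(4096).decode()))        # [[1, 'Asha'], [2, 'Bikram']]
```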

Multi-Tier client server computing models

In a single-tier system the database is centralized, which means the DBMS software and the data reside in one location, and dumb terminals are used to access the DBMS, as shown.
                 Figure: Single-tier system (the database is centralized)
With the rise of personal computers in business during the 1980s and the increased reliability of networking hardware, two-tier and three-tier systems became common. In a two-tier system, different software is required for the server and for the client. In the early stages of client-server computing, the model was called the two-tier computing model, in which the client is considered the data capture and validation tier and the server the data storage tier. This scenario is depicted in the figure below.

Problems of two-tier architecture

 
The need for enterprise scalability challenged this traditional two-tier client-server model. In the mid-1990s, as applications became more complex and could be deployed to hundreds or thousands of end users, the client side ran into the following problems:
                     Figure: Two-tier client-server model
• A 'fat' client, requiring considerable resources on the client's computer to run effectively, including disk space, RAM, and CPU.
• Client machines require administration, which results in overhead.

Three-tier architecture

By 1995, the three-tier architecture appeared as an improvement over the two-tier architecture. It has three layers:
 
First Layer: User Interface, which runs on the end user's computer (the client).

Second Layer: Application Server. This is the business logic and data processing layer. This middle tier runs on a server called the application server.

Third Layer: Database Server. This is the DBMS, which stores the data required by the middle tier. This tier may run on a separate server called the database server.
As described earlier, the client is now responsible only for the application's user interface, so it requires fewer computational resources; such clients are called 'thin clients' and require less maintenance.

Advantages of Client/Server Database System

  1. A client/server system uses less expensive platforms to support applications that had previously been running only on large and expensive minicomputers or mainframe computers.
  2. Clients offer icon-based, menu-driven interfaces, which are superior to the traditional command-line, dumb-terminal interface typical of minicomputer and mainframe computer systems.
  3. A client/server environment facilitates more productive work by the users and makes better use of existing data.
  4. A client/server database system is more flexible as compared to a centralized system.
  5. Response time and throughput are high.
  6. The server (database) machine can be custom-built (tailored) to the DBMS function and thus can provide better DBMS performance.
  7. The client (application) machine might be a personal workstation, tailored to the needs of the end users and thus able to provide better interfaces, high availability, faster responses, and overall improved ease of use to the user.
  8. A single database (on the server) can be shared across several distinct client (application) systems.

Disadvantages of Client/Server Database System

  1. Programming cost is high in client/server environments, particularly in initial phases.
  2. There is a lack of management tools for diagnosis, performance monitoring and tuning, and security control for the DBMS, clients, operating systems, and networking environments.

Why is C programming very popular?

C programming is very popular because of the following factors:

  •  C is ubiquitous. C is available for almost every platform.
  •  C is portable. If you write a piece of clean C, it compiles with minimal modifications on other platforms; sometimes it even works out of the box.
  • C is still the default language for UNIX and UNIX-like systems. If you want a library to succeed in open-source land, you need fairly good reasons not to use C. This is partially due to tradition, but more because C is the only language you can safely assume to be supported on any UNIX-like system. Writing your library in C means you can minimize dependencies.
  • C is simple. It lacks the expressivity of sophisticated OOP or functional languages, but its simplicity means it can be picked up quickly.
  • C is versatile. It is suitable for embedded systems, device drivers, OS kernels, small command-line utilities, large desktop applications, DBMS's, implementing other programming languages, and pretty much anything else you can think of.
  • C is fast. Most C implementations compile directly to machine code, and the programmer has full power over what happens at the machine level. There is no interpreter, no JIT compiler, no VM or runtime - just the code, a compiler, a linker, and the bare metal.
  • C is 'free' . There is no single company that owns and controls the standard, there are several implementations to choose from, there are no copyright, patenting or trademark issues for using C, and some of the best implementations are open-source.
  • C has a lot of momentum going. There is a huge amount of applications, libraries, tools and, most of all, communities to support the language.
  • C is mature. The last standard that introduced big changes is C99, and it is mostly backwards-compatible with previous standards. Unlike newer languages, we don't have to worry about breaking changes anytime soon.
  •  C has been around for a while. Back in the days when UNIX conquered the world, C (the UNIX programming language of choice) shared in its world domination and became the lingua franca of the programming world. Any serious programmer can be expected to at least make some sense of a chunk of C; the same can't be said about most other languages.
  • C is a base programming language: many other programming languages are offshoots of C.


What are the advantages and disadvantages of using assembly language?






Advantages


  1. Debugging and verifying. Looking at compiler-generated assembly code or the disassembly window in a debugger is useful for finding errors and for checking how well a compiler optimizes a particular piece of code.

  2. Making compilers. Understanding assembly coding techniques is necessary for making compilers, debuggers and other development tools.

  3. Embedded systems. Small embedded systems have fewer resources than PCs and mainframes. Assembly programming can be necessary for optimizing code for speed or size in small embedded systems.

  4. Hardware drivers and system code. Accessing hardware, system control registers etc. may sometimes be difficult or impossible with high level code.

  5. Accessing instructions that are not accessible from high-level language. Certain assembly instructions have no high-level language equivalent.

  6. Self-modifying code. Self-modifying code is generally not profitable because it interferes with efficient code caching. It may, however, be advantageous, for example, to include a small compiler in math programs where a user-defined function has to be calculated many times.

  7. Optimizing code for size. Storage space and memory is so cheap nowadays that it is not worth the effort to use assembly language for reducing code size. However, cache size is still such a critical resource that it may be useful in some cases to optimize a critical piece of code for size in order to make it fit into the code cache.

  8. Optimizing code for speed. Modern C++ compilers generally optimize code quite well in most cases. But there are still cases where compilers perform poorly and where dramatic increases in speed can be achieved by careful assembly programming.

  9. Function libraries. The total benefit of optimizing code is higher in function libraries that are used by many programmers.

  10. Making function libraries compatible with multiple compilers and operating systems. It is possible to make library functions with multiple entries that are compatible with different compilers and different operating systems. This requires assembly programming.


Disadvantages

The disadvantages of using an assembly language rather than an HLL include the following:

  1. Development time. Writing code in assembly language takes much longer than writing in a high-level language.

  2. Reliability and security. It is easy to make errors in assembly code. The assembler is not checking if the calling conventions and register save conventions are obeyed. Nobody is checking for you if the number of PUSH and POP instructions is the same in all possible branches and paths. There are so many possibilities for hidden errors in assembly code that it affects the reliability and security of the project unless you have a very systematic approach to testing and verifying.

  3. Debugging and verifying. Assembly code is more difficult to debug and verify because there are more possibilities for errors than in high-level code.

  4. Maintainability. Assembly code is more difficult to modify and maintain because the language allows unstructured spaghetti code and all kinds of tricks that are difficult for others to understand. Thorough documentation and a consistent programming style are needed.

  5. Portability. Assembly code is platform-specific. Porting to a different platform is difficult.

  6. System code can use intrinsic functions instead of assembly. The best modern C++ compilers have intrinsic functions for accessing system control registers and other system instructions. Assembly code is no longer needed for device drivers and other system code when intrinsic functions are available.


What are the differences among EPROM, EEPROM, and flash memory?


  1. EPROM is read and written electrically; before a write operation, all the storage cells must be erased to the same initial state by exposure of the packaged chip to ultraviolet radiation. Erasure is performed by shining an intense ultraviolet light through a window that is designed into the memory chip.  

    EEPROM is a read-mostly memory that can be written into at any time without erasing prior contents; only the byte or bytes addressed are updated.

     Flash memory is intermediate between EPROM and EEPROM in both cost and functionality. Like EEPROM, flash memory uses an electrical erasing technology. An entire flash memory can be erased in one or a few seconds, which is much faster than EPROM. In addition, it is possible to erase just blocks of memory rather than an entire chip. However, flash memory does not provide byte-level erasure. Like EPROM, flash memory uses only one transistor per bit, and so achieves the high density (compared with EEPROM) of EPROM.

Differentiate between RISC and CISC.

CISC Computer

  • The acronym is variously used. Read as above (i.e., as CISC computer), it means a computer that has a Complex Instruction Set Chip as its CPU. It is also referred to as CISC computing. It is sometimes called a CISC “chip”; this could contain a tautology in the last two words, but it can be overcome by thinking of it as a Complex Instruction Set Computer chip.
  • CISC chips have an increasing number of components and an ever increasing instruction set, and so are always slower and less powerful at executing “common” instructions.
  • CISC chips execute an instruction in two to ten machine cycles.
  • CISC chips do all of the processing themselves.
  • CISC chips are more common in computers that have a wider range of instructions to execute.

RISC Computer

  • The acronym is variously used. Read as above (i.e., as RISC computer), it means a computer that has a Reduced Instruction Set Chip as its CPU. It is also referred to as RISC computing. It is sometimes called a RISC “chip”; this could contain a tautology in the last two words, but it can be overcome by thinking of it as a Reduced Instruction Set Computer chip.
  • RISC chips have fewer components and a smaller instruction set, allowing faster accessing of “common” instructions.
  • RISC chips execute an instruction in one machine cycle.
  • RISC chips distribute some of their processing to other chips.
  • RISC chips are finding their way into components that need faster processing of a limited number of instructions, such as printers and games machines.



What is assembly language? What kinds of statements are present in an assembly language program? Discuss. Also highlight the advantages of assembly language.


What is assembly language?

Assembly language is a family of low-level languages for programming computers, microprocessors, microcontrollers, etc. They implement a symbolic representation of the numeric machine codes and other constants needed to program a particular CPU architecture. This representation is usually defined by the hardware manufacturer and is based on abbreviations (called mnemonics) that help the programmer remember individual instructions, registers, etc. Assembly language programming is writing machine instructions in mnemonic form, using an assembler to convert these mnemonics into actual processor instructions and associated data.

Kinds of statements in assembly language

An assembly program contains the following three kinds of statements:
1. Imperative statements: These indicate an action to be performed during execution of the assembled program. Each imperative statement typically translates into one machine instruction.
2. Declaration statements: The syntax of declaration statements is as follows:
[Label] DS <constant>
[Label] DC ‘<value>’
The DS statement reserves areas of memory and associates names with them. The DC statement constructs memory words containing constants.
3. Assembler directives: These instruct the assembler to perform certain actions during the assembly of a program. For example, the
START <constant>
directive indicates that the first word of the target program generated by the assembler should be placed in the memory word with address <constant>.
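As a concrete illustration of how these three kinds of statements differ, here is a small hypothetical sketch in Python of what an assembler's first pass might do with them. The mnemonics and the example program are invented for illustration and do not belong to any particular real assembler: the directive only sets the location counter, each imperative statement occupies one machine word, and the declarations reserve storage and define symbols.

```python
# Hypothetical first-pass sketch (invented mnemonics and program):
# directives adjust assembler state, imperative statements each become one
# machine word, and declarations reserve storage and define symbols.
program = [
    "START 100",        # assembler directive: place the program at address 100
    "READ  N",          # imperative statement -> one machine instruction
    "MOVER AREG, N",    # imperative statement
    "STOP",             # imperative statement
    "N     DS 1",       # declaration: reserve one word and name it N
    "ONE   DC '1'",     # declaration: a word containing the constant 1
]

loc = 0                 # location counter
symtab = {}             # symbol table: label -> assigned address

for line in program:
    parts = line.split()
    if parts[0] == "START":                      # directive: no code generated
        loc = int(parts[1])
    elif len(parts) >= 2 and parts[1] in ("DS", "DC"):
        symtab[parts[0]] = loc                   # declarations define symbols
        loc += int(parts[2]) if parts[1] == "DS" else 1
    else:
        loc += 1                                 # one word per imperative stmt

print(symtab)           # {'N': 103, 'ONE': 104}
```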
 

Advantages

The advantages of an assembly language program (compared with machine language programming) would be:
  • reduced errors
  • faster translation times
  • changes could be made easier and faster


What do you mean by HTTP? Explain its headers.


The Hypertext Transfer Protocol (HTTP) is a protocol used mainly to access data on the World Wide Web. The protocol transfers data in the form of plain text, hypertext, audio, video, and so on. It is called the hypertext transfer protocol because its efficiency allows its use in a hypertext environment where there are rapid jumps from one document to another.
HTTP functions like a combination of FTP and SMTP. It is similar to FTP because it transfers files and uses the services of TCP. However, it is much simpler than FTP because it uses only one TCP connection; there is no separate control connection, and only data are transferred between the client and the server.
HTTP is like SMTP because the data transferred between the client and server look like SMTP messages. In addition, the format of the messages is controlled by MIME-like headers.
However, HTTP differs from SMTP in the way the messages are sent from the client to the server and from the server to the client. Unlike SMTP, the HTTP messages are not destined to be read by humans; they are read and interpreted by the HTTP server and HTTP client (browser). SMTP messages are stored and forwarded, but HTTP messages are delivered immediately.
The idea of HTTP is very simple. A client sends a request, which looks like mail, to the server. The server sends the response, which looks like a mail reply, to the client. The request and response messages carry data in the form of a letter with a MIME-like format.
The commands from the client to the server are embedded in a letter-like request message. The contents of the requested file or other information are embedded in a letter-like response message.

Fig: HTTP
HTTP Transaction
The figure illustrates an HTTP transaction between the client and the server. The client initializes the transaction by sending a request message. The server replies by sending a response.

Messages
There are two general types of HTTP messages, shown in the figure: request and response. Both message types follow almost the same format.

Request Messages
A request message consists of a request line, headers, and sometimes a body.

Response Message
A response message consists of a status line, headers, and sometimes a body.
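A quick way to see both message types is to make a request with Python's standard library and print what comes back (example.com is just a placeholder host):

```python
# Minimal sketch (Python 3 standard library): send an HTTP request message and
# print the status line and headers of the response message.
import http.client

conn = http.client.HTTPConnection("example.com", 80)        # placeholder host
conn.request("GET", "/", headers={"Accept": "text/html"})   # request line + headers
resp = conn.getresponse()

print(resp.status, resp.reason)          # status line, e.g. "200 OK"
for name, value in resp.getheaders():    # MIME-like response headers
    print(f"{name}: {value}")
body = resp.read()                       # the body, if any
conn.close()
```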

What do you mean by FTP? Explain in detail.


File transfer protocol (FTP) is the standard mechanism provided by TCP/IP for copying a file from one host to another. Transferring files from one computer to another is one of the most common tasks expected from a networking or internetworking environment.
Fig: FTP
Although transferring files from one system to another seems simple and straightforward, some problems must be dealt with first. For example, two systems may use different file name conventions. Two systems may have different ways to represent text and data. Two systems may have different directory structures. All of these problems have been solved by FTP in a very simple and elegant approach.
FTP differs from other client-server applications in that it establishes two connections between the hosts. One connection is used for data transfer, the other for control information (commands and responses). Separation of commands and data transfer makes FTP more efficient. The control connection uses very simple rules of communication. The data connection, on the other hand, needs more complex rules due to the variety of data types transferred.

The client has three components:

  • The user interface
  • The client control process
  • The client data transfer process

The server has two components:

  • The server control process
  • The server data transfer process
The control connection is made between the control processes. The data connection is made between the data transfer processes.
The control connection remains connected during the entire interactive FTP session. The data connection is opened and then closed for each file transferred. It opens each time commands that involve transferring files are used, and it closes when the file is transferred. The two FTP connections, control and data, use different strategies and different port numbers.
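The two-connection behaviour can be observed with Python's standard ftplib module. This is a sketch only; the host name and file name are placeholders for an FTP server that allows anonymous login.

```python
# Sketch with Python's ftplib (host and file names are placeholders): commands
# travel over the control connection, while each listing or file transfer
# opens and closes its own data connection.
from ftplib import FTP

ftp = FTP("ftp.example.com")     # opens the control connection (port 21)
ftp.login()                      # anonymous login; commands use the control connection

ftp.retrlines("LIST")            # directory listing -> one data connection
with open("README", "wb") as f:
    ftp.retrbinary("RETR README", f.write)   # file download -> another data connection

ftp.quit()                       # closes the control connection
```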

What do you mean by network architecture? Explain about layering and protocols.


NETWORK ARCHITECTURE
A computer network must provide general, cost-effective, fair, and robust connectivity among a large number of computers. It must also evolve to accommodate changes both in the underlying technologies and in the demands placed on it by application programs. To help deal with this complexity, network designers have developed general blueprints, called network architectures, that guide the design and implementation of networks.
LAYERING AND PROTOCOL
To reduce the complexity of having all functions handled by a single unit, a technique called layering was introduced. In this approach, the architecture contains several layers and each layer is responsible for certain functions. The general idea is to start with the services offered by the underlying hardware and then add a sequence of layers, each providing a higher level of service. The services provided at the higher layers are implemented in terms of the services provided by the lower layers. A simple network has two layers of abstraction sandwiched between the application program and the underlying hardware.


The layer immediately above the hardware in this case might provide host-to-host connectivity, and the layer above it builds on the available host-to-host communication service and provides support for process-to-process channels.
            The features of layering are: (1) it decomposes the problem of building a network into more manageable components, and (2) it provides a more modular design, so that adding new services and making modifications is easier.
            Process-to-process channels come in two types: one for request/reply service and the other for message stream service.
            A protocol provides a communication service that higher-level objects use to exchange messages. Each protocol defines two different interfaces. First, it defines a service interface to other objects on the same system that want to use its communication services; this interface defines the operations that local objects can perform on the protocol. Second, a protocol defines a peer interface to its counterpart on another machine; it defines the form and meaning of the messages exchanged between protocol peers to implement the communication service.
 
            There are potentially multiple protocols at any given level, each providing a different communication service. The set of protocols that make up a system, together with the dependencies among them, is known as the protocol graph of the system.
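To make the layering idea concrete, here is a tiny illustrative sketch in Python (the layer names and header strings are invented): on the sending side, each layer provides its service by wrapping the message from the layer above with its own header before handing it to the layer below.

```python
# Illustrative sketch (layer names and headers invented): each layer offers a
# service to the layer above and, when sending, wraps the message with its own
# header before handing it down -- the layered design described above.
def process_channel_send(message: bytes) -> bytes:
    return b"PROC|" + message            # process-to-process channel header

def host_to_host_send(segment: bytes) -> bytes:
    return b"HOST|" + segment            # host-to-host connectivity header

def hardware_send(frame: bytes) -> None:
    print("on the wire:", frame)         # stand-in for the underlying hardware

app_data = b"hello"
hardware_send(host_to_host_send(process_channel_send(app_data)))
# prints: on the wire: b'HOST|PROC|hello'
```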

What are different types of network topologies? Explain the advantages and disadvantages of each topology with a diagram.

Topology refers to the way a network is laid out, either physically or logically. Two or more devices connect to a link; two or more links form a topology. It is the geometric representation of the relationship of all the links and linking devices to each other. The basic topologies are:

1. Mesh
2. Star
3. Tree
4. Bus
5. Ring

1. Mesh Topology:

Here every device has a dedicated point-to-point link to every other device. A fully connected mesh has n(n-1)/2 physical channels to link n devices, and every device must have n-1 I/O ports.

Figure: Mesh Topology

Advantages:
  1. They use dedicated links, so each link carries only its own data load and traffic problems are avoided.
  2. It is robust. If any one link gets damaged, it does not affect the others.
  3. It provides privacy and security.
  4. Fault identification and fault isolation are easy.

Disadvantages:
  1. The amount of cabling and the number of I/O ports required are very large, since every device is connected to every other device through a dedicated link.
  2. The sheer bulk of the wiring can be greater than the available space can accommodate.
  3. The hardware required to connect each device is highly expensive.

Example:
A mesh network has 8 devices. Calculate total number of cable links and IO ports needed.
Solution:
Number of devices = 8
Number of links     = n (n-1)/2
       = 8(8-1)/2
       = 28
Number of port/device = n-1
 = 8-1 = 7
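The same calculation can be checked for any n with a short sketch in Python, matching the formulas above:

```python
# Quick check of the mesh formulas for any number of devices n:
# links = n(n-1)/2, I/O ports per device = n-1.
def mesh_requirements(n: int) -> tuple[int, int]:
    links = n * (n - 1) // 2
    ports_per_device = n - 1
    return links, ports_per_device

print(mesh_requirements(8))   # (28, 7) -- matches the worked example above
```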

2. STAR TOPOLOGY:


Here each device has a dedicated link to the central hub. There is no direct traffic between devices; transmission occurs only through the central controller, namely the hub.

Figure: Star Topology

Advantages:
  1. Less expensive than mesh, since each device is connected only to the hub.
  2. Installation and configuration are easy.
  3. Less cabling is needed than in a mesh.
  4. Robustness.
  5. Fault identification and isolation are easy.

Disadvantages:
  1. Even though it requires less cabling than a mesh, when compared with other topologies (such as bus or ring) the cabling required is still large.

3. TREE TOPOLOGY:

It is a variation of star. Instead of all devices being connected to a central hub, here most of the devices are connected to a secondary hub that is in turn connected to the central hub. The central hub is an active hub. An active hub contains a repeater, which regenerates the received bit pattern before sending it on.

Figure: Tree Topology

The secondary hub may be active or passive. A passive hub simply provides a physical connection between the attached devices.

Advantages:
  1. It can connect more devices than a star.
  2. The distance a signal can travel between devices is increased.
  3. It can isolate and prioritize communication between different computers.

4. BUS TOPOLOGY:

A bus topology is multipoint. One long cable acts as a backbone to link all the devices; the devices are connected to the backbone by drop lines and taps. A drop line is a connection between a device and the main cable. A tap is a connector that splices into the main cable or punctures the sheathing.
Figure: Bus Topology
Advantages:
  1. Ease of installation.
  2. Less cabling.

Disadvantages:
  1. Difficult reconfiguration and fault isolation.
  2. Difficult to add new devices.
  3. Signal reflection at the taps can cause degradation in quality.
  4. Any fault or break in the backbone stops all transmission.

5. RING TOPOLOGY:

Here each device has a dedicated connection with the two devices on either side of it. The signal is passed in one direction from device to device until it reaches the destination, and each device has a repeater.
Figure: Ring Topology

Advantages:
  1. Easy to install.
  2. Easy to reconfigure.
  3. Fault identification is easy.
Disadvantages:
  1. Unidirectional traffic.
  2. A break in the ring can disable the entire network.