
Tuesday, January 22, 2019

The Definition and Evolution of the Internet

The Internet, also referred to as the Net, in simplest terms consists of a large group of millions of computers around the world that are connected to one another. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies such as phone lines, fibre optic lines, coaxial cable, satellites, and wireless connections.
The Internet seems to be everywhere today, with many people and devices connected to it. When connected to the Internet, people can access services such as online shopping, radio and TV broadcasts, chat, mail, information and newspapers. Today the Internet is accessed not only from regular stationary computers but also from mobile and portable devices such as Personal Digital Assistants (PDAs), cell phones, netbooks, iPods, iPads, Palm Pilots and others.
The Internet originated as a proposal from the Advanced Research Projects Agency (ARPA). The idea was to see how computers connected in a network, the ARPANET, could be used to access information from research facilities and universities. In 1969, four computers (located at UCLA, Stanford Research Institute, the University of California Santa Barbara and the University of Utah) were successfully connected. As time went on, other networks were connected. With four nodes by the end of 1969, the ARPANET spanned the continental United States (US) by 1971 and had connections to Europe by 1973. Though the Interconnected Network, or Internet, was originally limited to military, government, research, and educational purposes, it was eventually opened to the public. Today there are hundreds of millions of computers and other devices connected to the Internet worldwide. Other terms that are closely related to the Internet are intranet and extranet.

Intranet
The term “Intranet” describes a private network of personal computers (PCs) that is not connected to the world outside of the Intranet. The Intranet resides behind a firewall; if it allows access from the Internet, it becomes an Extranet. The firewall helps to control access between the Intranet and the Internet so that only authorised users will have access to the Intranet. Usually these people are members of the same company or organisation. Like the Internet itself, intranets are used to share information. Secure intranets are now the fastest-growing segment of the Internet because they are much less expensive to build and manage than private networks based on proprietary protocols.
Extranet
Extranets are becoming a very popular means for business partners to exchange information. An Extranet is an intranet that is partially accessible to authorised outsiders. Privacy and security are important issues in extranet use, so a firewall is usually provided to help control access between the Intranet and the Internet; in this case, the actual server resides behind the firewall. The level of access can be set differently for individual outside users or groups of them.
Internet Access
In order to have access to the vast resources on the Internet, you need to connect your computer to a computer system that is already on the Internet, usually one run by an Internet Service Provider (ISP). There are four major ways of connecting a client (user) computer to the Internet: a dial-up connection using a telephone line or an Integrated Services Digital Network (ISDN), a Digital Subscriber Line (DSL), a cable TV connection, or a satellite connection. While rural users may consider installing a satellite dish for Internet connections, urban users may have access to wireless connections. In most offices, users connect their computers via a local area network (LAN) connected to the Internet. Similarly, in many homes, users are beginning to connect their computers into Internet-connected LANs, too. Dial-up access gives a low-speed connection to the Internet. High-speed Internet connections, which include DSL, ISDN, leased lines, cable Internet, and satellite, are called broadband connections.
Dial-up Connection
Dial-up Internet access is a form of Internet access that uses the facilities of the public switched telephone network (PSTN) to establish a dialed connection to an Internet service provider (ISP) via telephone lines. The user’s computer or router uses an attached modem to encode and decode Internet Protocol packets and control information into and from analog audio frequency signals, respectively. The term “Dial-up Internet access” was coined during the early days of computer telecommunications when modems were needed to connect terminals or computers running terminal emulator software to mainframes, minicomputers, online services and bulletin board systems via a telephone line. To use a dial-up account, you need a modem.
A modem (modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information, and demodulates such a carrier signal to decode the transmitted information. To distinguish dial-up modems from newer, high-speed modems, they are also called analog modems. Most computers come with an internal modem, and most ISPs support modems at speeds of 28.8 kilobits per second (Kbps) and 56 Kbps. With dial-up, you connect only when you want to use Internet services and disconnect (hang up) when you are done. This type of data transmission is similar to using the telephone to make a call. The client computer's modem dials the preprogrammed phone number for a user's Internet Service Provider (ISP) and connects to one of the ISP's modems. Once the ISP has verified the user's account, a connection is established and data can be transmitted. The communication ends when either modem hangs up.
Dial-up connections are not expensive (a connection costs no more than a local telephone call), but the speed is usually low, at about 28 Kbps to 56 Kbps, because of the limitations of analog phone lines and telephone company switches.
ISDN
Integrated Services Digital Network (ISDN) is a set of communications standards for simultaneous digital transmission of voice, video, data, and other network services over the traditional circuits of the public switched telephone network. It allows dial-up into the Internet at speeds ranging from 64 to 128 Kbps. For this connection to be available, telephone companies have to install special ISDN digital switching equipment. The ISDN service intended for residential use is Basic Rate Interface (BRI). On one ISDN line, BRI provides two 64-Kbps channels, or B (bearer) channels, and one 16-Kbps channel, or D (delta) channel. The D channel is mostly used for signalling, such as to indicate that the line is busy. The B channels are where the action is: two B channels can be combined into a 128-Kbps line to the Internet, roughly twice the speed of the fastest analogue modem, 56 Kbps. To connect to your ISP via ISDN you need to confirm the availability of the access, and this will require you to have an ISDN adapter. ISDN lines are more expensive than normal phone lines, so the telephone rates are usually higher.
Cable TV Connection
This is a connection made to the Internet via a cable TV modem, designed to operate over cable TV lines. Since the coaxial cable used by cable TV provides much greater bandwidth than telephone lines, a cable modem can achieve speeds from 128 Kbps up to 10 Mbps. This, combined with the fact that millions of homes in developed countries are already wired for cable TV, has made the cable modem something of a holy grail for Internet and cable TV companies. The services offered are usually low-cost, unlimited, “always connected” access. However, there are a number of technical difficulties with this type of connection. The problem is that the cable network was designed to move information in one direction, from the broadcaster to the user.
Downstream speeds have been very impressive: the line can theoretically bring you data as fast as 30 Mbps, but upstream speed depends on line quality. The Internet, however, is a two-way system where data also needs to flow from the client to the server. In addition, it is still unknown whether the cable TV networks can handle the traffic that would ensue if millions of users began using the system for Internet access. Large cable companies are spending money to upgrade their networks to Hybrid Fiber-Coaxial (HFC) to handle two-way traffic better. Smaller providers cannot afford the upgrade, so they have to use a phone line at 28.8 Kbps for upstream data. Another issue borders on security and the need to either share or not share files amongst users.
DSL (Digital Subscriber Line)
Digital Subscriber Line (DSL) is a family of technologies that provides digital data transmission over the wires of a local telephone network. DSL service is delivered simultaneously with regular telephone on the same telephone line. DSL uses a different part of the frequency spectrum from analogue voice signals, so it can work in conjunction with a standard analogue telephone service, providing separate voice and data “channels” on the same line.
SDSL (Symmetric DSL) is the type of DSL that offers the same bandwidth capability in both directions, while ADSL (Asymmetric DSL) is the type of DSL that provides different bandwidths in the upstream and downstream directions. Most DSL lines are actually ADSL (Asymmetric Digital Subscriber Line). ADSL is optimised for the way many people use the
Internet: more downloads than uploads. The line is asymmetric because it has more capacity for data received by your computer (such as graphics, video, audio, and software upgrades) than for data that you send (such as e-mail and browser commands). The data throughput of consumer DSL services typically ranges from 256 Kbit/s to 40 Mbit/s in the direction to the customer (downstream), depending on DSL technology, line conditions, and service-level implementation. In ADSL, the data throughput in the upstream direction (i.e. in the direction to the service provider) is lower, hence the designation of asymmetric service. In Symmetric Digital Subscriber Line (SDSL) service, the downstream and upstream data rates are equal. Unlike cable modem technology, DSL provides a point-to-point connection to the ISP. As a result, this technology seems to be both more secure and less prone to local traffic fluctuations than its cable rival.
Digital Satellite Connection
Digital Satellite Systems (DSS), or direct broadcast satellite, allow one to get Internet information via satellite. Satellite Internet systems are an excellent, although rather costly, option for people in rural areas where Digital Subscriber Line (DSL) and cable modem connections are not available. A satellite installation can be used even where the most basic utilities may be lacking, if there is a generator or battery power supply that can produce enough electricity to run a desktop computer system. The two-way satellite Internet option offers an always-on connection that bypasses the dial-up process. In a two-way satellite Internet connection, the upstream data is usually sent at a slower speed than the downstream data arrives; thus, the connection is asymmetric. A dish antenna, measuring about two feet high by three feet wide by three feet deep, transmits and receives signals. Uplink speeds are nominally 50 to 150 Kbps for a subscriber using a single computer. The downlink occurs at speeds ranging from about 150 Kbps to more than 1200 Kbps, depending on factors such as Internet traffic and the capacity of the server. The main advantage of satellite technology over cable modems and DSL is accessibility. Satellite connections are faster than dial-up and ISDN, although not as fast as cable modem or DSL services, both of which can provide several megabits of bandwidth; in addition, cable and DSL access methods are cheaper. Equipment required for a satellite connection includes a mini-dish satellite receiver and a satellite modem. Satellite systems are also prone to rain fade (degradation during heavy precipitation) and occasional brief periods of solar interference.
SUMMARY & CONCLUSION
The Internet has remained a dominant means of communication over the past decade. It represents one of the most remarkable developments in the technological history of the world. It began as a medium for exchanging files within academia and has become a nearly ubiquitous phenomenon that has transformed almost every aspect of daily life. The Internet has made information quickly and easily available, publicly accessible and within easy reach via the connection infrastructure discussed in this unit. In the next unit, we shall look at some of the services available on the Internet and the enabling protocols. The general rule about Internet connections is “the faster, the better.”
The bandwidth and transfer rate determine how quickly pictures, sounds, animation and video clips will be downloaded. Since multimedia and interactivity make the Internet such an exciting tool for information sharing, speed is the key. Dial-up access provides an easy and inexpensive way for users to connect to the Internet; however, it is a slow-speed technology, and most users are no longer satisfied with dial-up or ISDN connections. Fortunately, the broadband access we once dreamed of is now possible with TV cable, DSL and satellite links.
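To put this in perspective, consider a rough worked example (nominal speeds, ignoring protocol overhead and line conditions): a 5 MB file is about 40,000,000 bits. Over a 56 Kbps dial-up modem it takes roughly 40,000,000 / 56,000 ≈ 714 seconds, about 12 minutes, to download; over a 10 Mbps cable connection the same file arrives in about 4 seconds.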


Monday, January 21, 2019

How to Create Console Assemblies


Console applications are the kind of executables that you would use to create a command line utility, such as dir or xcopy. Actually, many ad-hoc C# projects are console applications. The C# source code shown in Figure 1 calculates how many rabbits you get after a certain number of generations (assuming ideal circumstances, of course).
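Figure 1 itself is an image, so the original listing is not reproduced here; the sketch below reconstructs what Rabbits.cs plausibly looks like from the description that follows (the class name App, the string[] parameter to Main(), the try-catch block, the while loop and the Decimal type are all taken from the text, while the exact variable names and messages are assumptions).

// Rabbits.cs - a sketch of the iterative Fibonacci ("rabbit") calculator
// described in the text; details beyond the description are assumptions.
using System;

class App
{
    static void Main(string[] args)
    {
        try
        {
            // The number of generations is passed as a command line argument.
            int generations = Int32.Parse(args[0]);

            // Decimal allows far more generations than a 32-bit integer would.
            decimal previous = 0, current = 1;

            int generation = 1;
            while (generation < generations)
            {
                decimal next = previous + current;
                previous = current;
                current = next;
                generation++;
            }

            Console.WriteLine("Generation {0}: {1} rabbits", generations, current);
        }
        catch (Exception e)
        {
            Console.WriteLine("Usage: Rabbits <generations>");
            Console.WriteLine(e.Message);
        }
    }
}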

Figure 1 above is an example of a console application. If you build this application you can run it from the command line, and pass it an argument indicating the number of generations you would like to calculate.
Building a Console Application File
To build this file using the command line compiler you would use the following command.
C:\>csc Rabbits.cs
You will notice that it is not necessary to use the /Target switch as it was when we built the earlier assembly. This is because the default assembly built by the C# compiler is a console executable. This is how you build a console application; simple stuff!
However, the Rabbits.cs sample also shows some interesting things about C# in general, so we will highlight them here. Like our last application, Rabbits.cs defines a class (arbitrarily named App) which defines an entry point function named Main().
The Main() method in Rabbits.cs does something a little different by taking a parameter defined as an array of String objects. These are the command line arguments passed to the program.
Rabbits.cs uses structured exception handling to handle errors. This is the try-catch block that surrounds the bulk of the application. You will also notice from this sample that C# uses C syntax for its loop constructs. The while loop syntax in Rabbits.cs is exactly what it would be in C, C++, or Java.
The sample code in Figure 1 uses the static WriteLine() method of the Console type defined in the Framework Class Library to output text to the console window. Notice that an instance of the Console type was not necessary to call the method. This is because WriteLine() is defined as a static method.
This sample uses a numeric type called Decimal, which allows us to calculate many more generations than we could with a 32-bit integer value. The Decimal type is well suited for financial calculations and other applications that need many significant digits.
Rabbits.cs is an iterative Fibonacci calculator. It's written this way just for fun; there is nothing about C# that would prohibit you from implementing this algorithm using recursion.
Building and Running a Console Application
1. The source code in Figure 1, Rabbits.cs, is a console Fibonacci number generator written in C#.
2. Type in or copy the source code and save it in a .cs file.
3. Compile the source code using the command line compiler.
a. Hint: The line that you would use to compile this application is as follows.
c:\>csc Rabbits.cs
4. Run the new executable.
SUMMARY & CONCLUSION
In this treatise, we learnt that console applications are the kind of executables used to create command line utilities such as dir or xcopy. We considered the concept of a console application, looked at a classic example, and identified the steps required to build and run one.

Internet Users Increased from 108.4 Million to 111.6 Million in Nigeria

Internet users in Nigeria increased marginally to more than 111.6 million in December 2018, the Nigerian Communications Commission has said. The NCC made this known on Monday in its Monthly Internet Subscribers Data for December posted on its website.

The data showed that overall internet users increased to 111,632,516 in December 2018 from the 108,457,051 recorded in November 2018, an increase of 3,175,465 new subscribers.

According to the data, Airtel, MTN and Globacom gained more internet subscribers during the month under review, while 9mobile was the biggest loser.

The breakdown revealed that MTN gained the most, with 2,221,153 new internet users in December 2018, increasing its subscriptions to 43,899,957 as against 41,678,804 in November 2018.

Saturday, January 12, 2019

The Overview of XML

XML permits document authors to create markup (that is, text-based notation for describing data) for virtually any type of information. This enables document authors to create entirely new markup languages for describing any type of data, such as mathematical formulas, software configuration instructions, chemical molecular structures, music, recipes and financial reports.
XML describes data in a way that both human beings and computers can understand. XML is not a replacement for HTML: HTML is about displaying information, while XML is about carrying information. XML uses tags to structure data. The tags are not predefined; every developer is expected to define his or her own tags. XML is designed to be self-descriptive. Tags are markup constructs that begin with "<" and end with ">". Tags come in three flavours: start-tags, for example <section>; end-tags, for example </section>; and empty-element tags, for example <line-break />. An element's start and end tags enclose text that represents a piece of data.
Every XML document must have exactly one root element that contains all the other elements. XML documents may begin by declaring some information about themselves, as in the following example.
<?xml version="1.0" encoding="ISO-8859-1"?>
Now let us take a look at this simple XML code below:
Example 1:
<?xml version="1.0" encoding="ISO-8859-1"?>
<MyPersonalDetails>
<FullName>
<FirstName>Bobwalej</FirstName>
<LastName>Ade</LastName>
</FullName>
<BirthDate>
<Month>June</Month>
<Date>17</Date>
<Year>1970</Year>
</BirthDate>
<MailingAddress>
<OnlineCommunity>Christian 24 Library</OnlineCommunity>
</MailingAddress>
</MyPersonalDetails>
From the code above, XML did nothing at all; it is just information wrapped in tags. Someone must write a piece of software to send, receive or display it. The first line of code gives the version and character encoding used by this XML document. The second line tells what kind of information the XML document carries. An XML application that uses the code in Example 1 will look at the root or parent tag in the XML document.
Here, it is <MyPersonalDetails>, which is not defined by XML itself: XML allows authors to create their own tags for each document. Like other markup languages, XML allows an element to contain two or more child tags, commonly known as nested tags. The <FullName> tag, for instance, has two child tags, and the <BirthDate> tag has three, and so on.
Also, XML tags are case sensitive, meaning we cannot declare a <MyPersonalDetails> opening tag with a closing tag of </myPersonalDetails>. Opening and closing tags must be written with the same case.
Creating and Modifying XML Documents
XML allows one to describe data precisely in a well-formed format. XML documents are highly portable: any text editor, such as Notepad, or other software that supports ASCII/Unicode characters can open XML documents for viewing and editing. An XML document is created by typing XML code into a text editor and then saving the document with a filename and a .xml extension. Most Web browsers can display XML documents in a formatted manner that shows the XML's structure.
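Beyond hand-editing in a text editor, an XML document can also be generated programmatically. As a minimal C# sketch, assuming the .NET System.Xml.Linq (XDocument) API and reusing the element names from Example 1:

using System.Xml.Linq;

class CreateXml
{
    static void Main()
    {
        // Build the document tree in memory, then save it with a .xml extension.
        var doc = new XDocument(
            new XDeclaration("1.0", "ISO-8859-1", null),
            new XElement("MyPersonalDetails",
                new XElement("FullName",
                    new XElement("FirstName", "Bobwalej"),
                    new XElement("LastName", "Ade"))));
        doc.Save("detail2.xml");
    }
}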
Processing XML Documents
To process an XML document, you need an XML parser (or XML processor). A parser is software that checks that the document follows the syntax rules specified by the W3C's XML recommendation and makes the document's data available to applications. A parser would, for example, check an XML document to ensure that there is a single root element, a start tag for each element, and properly nested tags (that is, the end tag for a nested element must appear before the end tag of the enclosing element).
Furthermore, XML is case sensitive, so the proper capitalisation must be used in elements, as in Example 1. A document that conforms to this syntax is said to be a well-formed XML document and is syntactically correct. If an XML parser can process an XML document successfully, that XML document is well-formed. Parsers can provide access to XML-encoded data in well-formed documents only. XML parsers are often built into software or available for download over the Internet. Examples of parsers include Microsoft XML Core Services (MSXML), Xerces, Expat and so on.
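To make the parsing step concrete, here is a minimal C# sketch, assuming the .NET XDocument parser and the element names from Example 1 (the file name detail2.xml follows the text):

using System;
using System.Xml.Linq;

class ParseXml
{
    static void Main()
    {
        try
        {
            // The parser checks well-formedness while loading the document.
            XDocument doc = XDocument.Load("detail2.xml");

            // Navigate the element tree to reach a piece of data.
            string firstName = doc.Root
                                  .Element("FullName")
                                  .Element("FirstName")
                                  .Value;
            Console.WriteLine("FirstName: " + firstName);
        }
        catch (System.Xml.XmlException e)
        {
            // A document that is not well-formed is rejected here.
            Console.WriteLine("Not well-formed: " + e.Message);
        }
    }
}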
Validating XML Documents
In addition to being well formed, an XML document may be valid. This means that it contains a reference to a Document Type Definition (DTD) and that its elements and attributes are declared in that DTD and follows the grammatical rules for them that the DTD specifies. A DTD is an example of a schema or grammar.
Since the initial publication of XML 1.0, there has been substantial work in the area of schema languages for XML. Such schema languages typically constrain the set of elements that may be used in a document, which attributes may be applied to them, the order in which they may appear, and the allowable parent/child relationships. When an XML document references a DTD or a schema, some parsers (called validating parsers) can read the DTD/schema and check that the XML document conforms to it; if it does, the XML document is valid. For example, if in Example 1 we referenced a DTD specifying that the BirthDate element must have Month, Date and Year, then the exclusion of the Year element would invalidate the XML document detail2.xml.
However, the XML document would still be well formed, because it follows proper XML syntax (that is, it has one root element, each element has a start tag and an end tag, and the element are nested properly). By definition, a valid XML document is well formed.
Parsers that cannot check for document conformity against DTDs/schemas are nonvalidating parsers: they determine only whether an XML document is well-formed, not whether it is valid. Schemas are XML documents themselves, whereas DTDs are not. XML processors are classified as validating or non-validating depending on whether or not they check XML documents for validity. A processor that discovers a validity error must be able to report it, but may continue normal processing.
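As an illustration of this distinction, a DTD-validating parse might be set up as follows in C# (a sketch assuming the .NET XmlReader API; the document is expected to reference its DTD in a DOCTYPE declaration):

using System;
using System.Xml;

class ValidateXml
{
    static void Main()
    {
        var settings = new XmlReaderSettings
        {
            DtdProcessing = DtdProcessing.Parse,
            ValidationType = ValidationType.DTD
        };
        // Report each validity error and continue, as the text describes.
        settings.ValidationEventHandler += (s, e) => Console.WriteLine("Invalid: " + e.Message);

        using (XmlReader reader = XmlReader.Create("detail2.xml", settings))
        {
            while (reader.Read()) { } // read through the whole document
        }
        Console.WriteLine("Done.");
    }
}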
Formatting and Manipulating XML Documents
XML documents can be manipulated to appear differently on several devices. For example, the way an XML document renders on Personal Digital Assistants (PDAs) is different from desktop computers. Most XML documents contain only data; they do not include formatting instructions, so applications that process XML documents must determine how to process, manipulate or display the data. Extensible Stylesheet Language (XSL) can be used to specify rendering instructions for different platforms. XML-processing programs can also search, sort and manipulate XML data using XSL. Other popular XML-related technologies are: XML Path Language (XPath), which is used for accessing parts of an XML document; XSL Formatting Objects (XSL-FO), an XML vocabulary used to describe document formatting; and XSL Transformations (XSLT), a language used for transforming XML documents into other documents.
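As a short C# sketch of applying an XSLT transformation (assuming the .NET XslCompiledTransform API; the stylesheet and output file names are illustrative):

using System.Xml.Xsl;

class TransformXml
{
    static void Main()
    {
        // Load the stylesheet, then transform the XML into (say) HTML.
        var xslt = new XslCompiledTransform();
        xslt.Load("detail2.xsl");
        xslt.Transform("detail2.xml", "detail2.html");
    }
}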
Viewing an XML Document in Web Browser
Example 1 shows a simple listing of a text file, detail2.xml. This document does not contain formatting information, because XML is a tool for describing the structure, storage and transfer of data across disparate formats and sources. Formatting and displaying data from an XML document is achieved in different ways within specific application platforms. For instance, when the user loads detail2.xml in Internet Explorer or Firefox, the browser's parser (MSXML, or Microsoft XML Core Services, in the case of Internet Explorer) parses the file and displays the document data. Each browser has a built-in style sheet to format the data.

The XML document will be displayed with colour-coded root and child elements. A plus (+) or minus sign (-) to the left of the elements can be clicked to expand or collapse the element structure. To view the raw XML source (without the + and - signs), select “View Page Source” or “View Source” from the browser menu.
Although these symbols are not part of the XML document, both browsers place them next to every container element. A minus sign indicates that the browser is displaying the container element's child elements. Clicking the minus sign next to an element collapses that element (that is, it causes the browser to hide the container element's children and replaces the minus sign with a plus sign).
Conversely, clicking the plus sign next to an element expands the element (that is, it causes the browser to display the container element's children and replaces the plus sign with a minus sign).
This behaviour is similar to viewing the directory structure on one's system in Windows Explorer or another similar directory viewer. In fact, a directory structure often is modelled as a series of tree structures in which the root of the tree represents a disk drive (for instance, C:) and nodes in the tree represent directories. Parsers often store XML data as tree structures to facilitate efficient processing.
Conclusion
In the two decades since the introduction of XML, it has been used to create hundreds of languages, which include XHTML; WSDL for describing available web services; WAP and WML as markup languages for handheld devices; RSS for news feeds; RDF and OWL for describing resources and ontologies; SMIL for describing multimedia for the web; etc. In addition, XML-based formats have become the default for most office-productivity tools, including Microsoft Office (Office Open XML) and Apple's iWork.
Summary
XML describes data in a way that both human beings and computers can understand. It enhances the storage and exchange of data amongst disparate computer systems. So far you have learnt how to create, modify, validate, format, process and view XML documents in a browser.

Monday, December 17, 2018

Search Engines and Current Web Programming Development Tools

Quite a number of search tools are available today that allow users to find information on the Web quickly and easily. Two basic approaches have evolved in response to the need to organise and to locate information on the World Wide Web: directories and search engines.
A directory offers a hierarchical representation of hyperlinks to Web pages, broken down into topics and subtopics. A search engine, on the other hand, is a set of programs that searches for information within a specific realm and collates that information in a database. Although search engines are really a general class of programs, the term is often used to specifically describe Internet search engines like Google, AltaVista and Excite, which enable users to search for documents on the World Wide Web, FTP servers and USENET newsgroups.
Search engines can also be devised for offline content, such as a library catalogue, the contents of a personal hard drive, or a catalogue of museum collections. Generally search engines help people to organise and display information in a way which makes it readily accessible.
Search Tools
A search tool is software that enables a user to quickly and easily gain access to information. The collection of search tools is constantly evolving, with new ones coming on the scene and others disappearing. Two basic approaches have evolved in response to the need to organise and locate information on the World Wide Web: directories and search engines. Both approaches rely on a database of information about Web pages, created either manually or by special programs that crawl the Web, which allows that information to be accessed quickly and easily. A request for information is answered by the search tool retrieving the information from its already-constructed database of indexed Web details. Other terms that relate to searching for information on the Web are as follows:
Search Terminology
Search tool: This refers to any mechanism for locating information on the Web. Examples include search or metasearch engine, and directory.
Metasearch engine: This refers to an all-in-one search engine that performs a search by calling on more than one other search engine to do the actual work.
Query: This refers to the information entered into a form on a search engine’s Web page that describes the information being sought.
Query syntax: This refers to the set of rules describing what constitutes a legal query; on some search engines, special symbols may be used in a query.
Query semantics: This refers to the set of rules that defines the meaning of a query.
Hit: This refers to a URL that a search engine returns in response to a query.
Match: This is a synonym for hit.
Relevancy score: This refers to a value that indicates how close a match a URL was to a query; it is usually expressed as a value from 1 to 100, with a higher score meaning more relevant.
Directories
The first method of finding and organising Web information, as stated earlier, is the directory approach. A directory offers a hierarchical representation of hyperlinks to Web pages, broken down into topics and subtopics, and the hierarchy can descend many levels. The specific number of levels is determined by the taxonomy of topics. Examples of popular general directories are LookSmart, Lycos, Dmoz, Yahoo!, etc.
Search Engines
The second approach to organising and locating information on the Web is a search engine, which is a computer program that does the following:
1. allows a user to submit a form containing a query that consists of a word or phrase describing the specific information of interest to be located on the Web
2. searches its database to try to match the query
3. collates and returns a list of clickable URLs containing presentations that match the query; the list is usually ordered with the better matches appearing at the top
4. permits a user to revise and resubmit a query.
A survey ranking the market share of web search engines, carried out by Net Marketshare in December 2010, showed:
• Google: 84.65%
• Yahoo: 6.69%
• Baidu: 3.39%
• Bing: 3.29%
• Others: 1.98%
Components of a Search Engine
Search engines have the following components:
  1. User Interface
  2. Database
  3. Robot or Spider Software
1. User Interface
The user interface is the mechanism by which users submit queries to the search engine by typing keywords or phrases into a text box. When the form is submitted, the data typed into the text box is sent to a server-side script that searches the database using the keywords entered.
Afterwards, search results are displayed in the browser as a list of information, such as the URLs of Web pages that meet the user's criteria. This result set is formatted with a link to each page along with additional information that might include the page title, a brief description, the first few lines of text, the size of the page, and a relevancy score for each hit. This way, the user is able to make an informed choice as to which hyperlinks to follow. Hyperlinks to help files are usually displayed prominently, and advertisements should not hinder a reader's use of the search engine. The order in which pages are displayed may depend on paid advertisement, alphabetical order, or link popularity. Each search engine has its own policy for ordering the search results, and these policies can change over time.
2. Database
A database is a collection of information organised so that its contents can easily be accessed, managed and updated. Database management systems (DBMSs) such as Oracle, Microsoft SQL Server, Informix, MySQL or IBM DB2 are used to configure and manage the database. The databases associated with search engines are extremely large indexes of pages that require a highly efficient search strategy to retrieve information. Computer scientists have spent years developing efficient searching and sorting strategies, which are implemented in search engines. The information displayed as the results of your search usually comes from the database accessed by the search engine site. Some search engines, such as AOL and Netscape, use a database provided by Google.
3. Robot
A robot (sometimes called a spider) is a program that automatically traverses the hypertext structure of the Web by retrieving a Web page document and following the hyperlinks on the page. It moves like a robot spider on the Web, accessing and documenting Web pages. It requests pages from a website in the same way as Internet Explorer, Firefox or any other browser does. The spider does not collect images or formatting details; it is only interested in text and links, and the URL from which they come.
The spider categorises the pages and stores information about the Web site and the Web pages in a database. Various robots may work differently, but in general they access and may store important information from web pages such as the title, meta tag keywords, meta tag description, and some of the text on the page (usually either the first few sentences or the text contained in the heading tags). For multimedia elements in web pages to be indexed, the “alt” attribute should be used so that they have values visible to the search engines.
The spider software works in conjunction with the index software, which uses the information collected by the spider. The spider takes the information it has gathered about a web page and sends it to the index software, where it is analysed and stored. The index makes sense of the mass of text, links and URLs using an algorithm, a complex mathematical formula that indexes the words, pairs of words and so on.
The algorithm analyses the pages and links for word combinations to determine what the web pages are all about, that is, what topics are being covered. Then scores are assigned that allow the search engine to measure how relevant or important the web pages (and URLs) might be to the user or visitor. Major search engines such as Google, Yahoo or Bing use proprietary algorithms for scoring.
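To make the indexing idea concrete, the toy C# sketch below builds an inverted index that maps each word to the pages containing it, using raw occurrence counts as a stand-in for a relevancy score; real engines use far more sophisticated, proprietary algorithms, and the URLs here are illustrative.

using System;
using System.Collections.Generic;
using System.Linq;

class ToyIndex
{
    // word -> (page URL -> occurrence count)
    static Dictionary<string, Dictionary<string, int>> index =
        new Dictionary<string, Dictionary<string, int>>();

    static void Add(string url, string text)
    {
        foreach (string word in text.ToLower().Split(' '))
        {
            if (!index.ContainsKey(word))
                index[word] = new Dictionary<string, int>();
            var pages = index[word];
            pages[url] = pages.TryGetValue(url, out int n) ? n + 1 : 1;
        }
    }

    static void Main()
    {
        Add("http://example.com/a", "search engines index the web");
        Add("http://example.com/b", "spiders crawl the web the whole web");

        // Query: rank pages for the word "web" by occurrence count (the "score").
        foreach (var hit in index["web"].OrderByDescending(p => p.Value))
            Console.WriteLine(hit.Key + " score " + hit.Value);
    }
}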
Listing in a Search Engine and Search Index
The components of a search engine (robot, database and search form) work together to obtain information about Web pages, store information about Web pages, and provide a graphical user interface to facilitate searching for and displaying a list of Web pages relevant to given keywords. In recent times, search engines have become one of the top methods used to drive traffic to e-commerce sites. Though very effective, it is not always easy to get listed in a search engine or search directory. There is a trend away from free listing in search engines; current trends entail paying for listing consideration in a search engine or directory. These approaches include an express submit or express inclusion, paying for preferential placement in search engine displays (called sponsoring or advertising), and paying each time a visitor clicks the search engine's link to your site. Yahoo and Google call these programs Sponsor Results and Google AdWords respectively. In these programs, payment is made when the site is submitted for review. If accepted, the site has a listing, usually at the top or right margin of the search results. In addition to the initial fee, the Web site owner must pay each time a visitor clicks on the search engine link to their site; this is called cost-per-click (CPC).
A web search engine is designed to search for information on the World Wide Web, FTP servers, USENET newsgroups, and so on. The search results, which may consist of web pages, images, information and other types of files, are generally presented in a list of results and are often called hits. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained by human editors, search engines operate algorithmically or use a mixture of algorithmic and human input. Search engines use automated software programs to survey the Web and build their databases. Web documents are retrieved by these programs and analysed, and data collected from each web page are then added to the search engine index. Each search engine uses a proprietary algorithm to create its indices such that, ideally, only meaningful results are returned for each query. The best URLs are then returned to the user as hits, ranked in order, with the best results (depending on the algorithm used by the search engine) at the top.
Current Web Programming Development Tools
Advances in Internet technology have led to the release of several tools for Web development. Many of the tools are easy to use and are made available to the public free of charge to aid development. A popular example is the LAMP (Linux, Apache, MySQL, PHP) stack, which is usually distributed free of charge. The availability of free tools has greatly increased the rate at which people around the globe set up new Web sites daily. Easy-to-use software for Web development includes, amongst others, Adobe Dreamweaver, NetBeans, WebDev, Microsoft Expression Studio, Adobe Flex, and so on.
Using such software, virtually anyone can develop a Web page in a matter of minutes. Knowledge of Hypertext Markup Language (HTML) or other programming languages is not usually required, but is recommended for professional results. A newer generation of web development tools uses the strong growth in LAMP, Java Platform Enterprise Edition, and Microsoft .NET technologies to provide the Web as a way to run applications online. Web developers now help to deliver applications as Web services, which were traditionally only available as applications on a desktop computer. Thus, instead of running executable code on a local computer, users can interact with online applications to create new content. This has enabled new methods of communication and allowed for many opportunities to decentralise information and media distribution. In this unit, we shall discuss other technologies, models and tools that enhance easy development of Web applications.
Web Services
There is no need to reinvent the wheel with every new project. With web services, developers can use existing software solutions to create other feature-rich applications. A Web service is a self-describing, self-contained application that provides some business functionality through an Internet connection. For example, an organisation could create a Web service to facilitate information exchange with its partners or vendors. Web services make software functionality available over the Internet so that programs written in PHP, ASP, JSP, JavaBeans, COM, and other technologies can make a request to a program running on another server (a web service) and use that program's response in a website, Wireless Application Protocol (WAP) service, or other application. The Universal Description, Discovery and Integration (UDDI) registry is a directory of web service interfaces and is used for storing information about web services. It also provides a method of describing a service, invoking a service, and locating available services.
A service is described by the Web Services Description Language (WSDL). UDDI communicates via the Simple Object Access Protocol (SOAP), a communication protocol which is language independent and based on XML; WSDL is also based on XML and is used to describe and locate Web services. Other systems interact with the Web service in a manner prescribed by its description, using SOAP messages typically conveyed over HTTP with an XML serialisation in conjunction with other Web-related standards.
Although UDDI is built into the Microsoft .NET platform, it is a standard backed by a number of technology companies, including IBM, Microsoft, and Sun Microsystems. The incorporation of Web services into new programs allows the speedy development of new applications. The use of Web Application Programming Interfaces (Web APIs) is a current trend in Web 2.0 development, where the emphasis has been moving away from SOAP-based services towards representational state transfer (REST) based communications.
REST services do not require XML, SOAP, or WSDL service-API definitions. A Web API is typically a defined set of Hypertext Transfer Protocol (HTTP) request messages along with a definition of the structure of response messages, usually expressed in Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. It allows the combination of multiple Web services into new applications known as mashups.
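As a minimal C# sketch of consuming such a REST-style Web API over HTTP (assuming .NET's HttpClient; the endpoint URL and response shape are hypothetical):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class RestClient
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // A REST request is just an HTTP GET on a resource URL (hypothetical endpoint).
            string json = await http.GetStringAsync("https://api.example.com/books/42");

            // The response is typically JSON (or XML) describing the resource.
            Console.WriteLine(json);
        }
    }
}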
Fundamentally, Web services are all about having a service, publishing an API for use by other services on the network, and encapsulating implementation details. The following essential services are expected in any service-oriented environment such as Web services:
• A Web service needs to be created, and its interfaces and invocation methods must be well defined.
• A Web service needs to be published to one or more repositories (intranet or Internet) for potential users to locate.
• A Web service needs to be located to be invoked by potential users.
• A Web service needs to be invoked to be of any benefit.
• A Web service may need to be unpublished when it is no longer available or needed.
Cloud Computing
Cloud computing refers to the use and access of multiple server-based computational resources via a digital network (a WAN, an Internet connection using the World Wide Web, and so on). Cloud users may access the server resources using a computer, netbook, pad computer, smartphone, PDA, or other device. In cloud computing, applications are provided and managed by the cloud server and data are stored remotely in the cloud configuration. Users do not download and install applications on their own device or computer; all processing and storage is maintained by the cloud server.
The online services are usually offered by a cloud provider or by a private organisation. Before the advent of cloud computing, tasks such as word processing would not be possible without the installation of application software on a user's computer. A user would need to purchase a license for each application from a software vendor and obtain the right to install the application on one computer system.
As computer technologies advanced, with local area networks (LANs) and more networking capabilities, the client-server model of computing was born, where server computers with enhanced capabilities and large storage devices could be used to host application services and data for a large workgroup. In a client-server computing environment, a network-friendly client version of the application was required on client computers, which utilised the client system's resources (memory and CPU for processing), even though the resultant application data files (such as word processing documents) were stored centrally on the data servers. In this case, many users on a network purchased multiple user licenses of an application. Cloud computing differs from the classic client-server model discussed in module one of this course material by providing applications from a server that are executed and managed by a client's web browser, with no installed client version of an application required.
Cloud computing provides computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. One may compare this scenario with the electricity grid, wherein end-users consume power without needing to understand the component devices or infrastructure required to provide the service. The reason behind centralisation is to give cloud service providers complete control over the versions of the browser-based applications provided to clients, which removes the need for version upgrades or license management on individual client computing devices.
Blogs
A blog is a blend of the term “Web log.” It is a type of Website or part of a Website. Many blogs provide commentary or news on a particular subject; others function as more personal online diaries. A typical blog combines text, images, and links to other blogs, Web pages, and other media related to its topic. The ability of readers to leave comments in an interactive format is an important part of many blogs. Most blogs are primarily textual, although some focus on art (art blog), photographs (photoblog), videos (video blogging), music (MP3 blog), and audio (podcasting).
Microblogging is another type of blogging, featuring very short posts. Most blogs are interactive, allowing visitors to leave comments and even communicate with each other via widgets on the blogs. This interactivity distinguishes them from other static websites. Entries are commonly displayed in reverse-chronological order. Many blogs are hosted at blog communities such as http://blogspot.com.
RSS
Really Simple Syndication or Rich Site Summary (RSS) is commonly used to create newsfeeds from blog postings and other Web sites. The RSS feeds contain a summary of new items posted to the site. Web feeds benefit publishers by letting them syndicate content automatically, and they benefit readers who want to subscribe to timely updates from favoured websites or to aggregate feeds from many sites into one place. RSS feeds can be read using software called an “RSS reader”, “feed reader”, or “aggregator”, which can be web-based, desktop-based, or mobile device-based. Some browsers, such as Firefox, Safari, and Internet Explorer 7, can display RSS feeds. A standardised XML file format allows the information to be published once and viewed by many different programs.
The user subscribes to a feed by entering the feed's URL into the reader or by clicking a feed icon in a Web browser that initiates the subscription process. The RSS reader checks the user's subscribed feeds regularly for new work, downloads any updates that it finds, and provides a user interface to monitor and read the feeds. RSS allows users to avoid manually inspecting all of the websites they are interested in, and instead subscribe to Websites such that all new content is pushed onto their browsers when it becomes available. By providing up-to-date, linkable content for anyone to use, RSS enables website developers to draw more traffic. It also allows users to get news and information from many sources easily and reduces content developers' time. RSS simplifies importing information from portals, weblogs and news sites. Any piece of information can be syndicated via RSS, not just news.
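Since an RSS feed is just a standardised XML file, the core of a feed reader can be sketched in a few lines of C# (assuming the .NET XDocument API and the RSS 2.0 <item>/<title>/<link> element names; feed.xml stands in for a downloaded copy of a feed):

using System;
using System.Xml.Linq;

class FeedReader
{
    static void Main()
    {
        // RSS 2.0 wraps each posting in an <item> element with a <title> and <link>.
        XDocument feed = XDocument.Load("feed.xml");
        foreach (XElement item in feed.Descendants("item"))
            Console.WriteLine(item.Element("title").Value + " - " + item.Element("link").Value);
    }
}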
Podcasts
Podcasts are typically audio files, delivered by an RSS feed on the Web. They may also be made available by recording an MP3 file and providing a link on a Web page. They usually would take the format of an audio blog, interview or radio show. These files can be saved to your computer or to an MP3 player (such as iPod) for later listening.
Wiki
A wiki is a Web site that allows immediate updates by visitors using a simple form on a Web page at any time. Some wikis are designed to serve a small group of people such as the members of an organisation. The most powerful and popular wiki is Wikipedia, which is accessible at http://wikipedia.org.
It is an online encyclopaedia which can be updated by any registered user at any time. A wiki is a form of social software in action, where visitors sharing their collective knowledge can create a resource freely used by all. Though there have been isolated cases of practical jokes and occasionally inaccurate information posted on Wikipedia, the information and resources provided are still good enough as a starting point when exploring a topic.
Microformat
Microformat is a standard format for representing information aggregates that can be understood by computers, thereby enabling easier access and retrieval of information. It could also lead to new types of applications/services on the Web. Some people consider the web as containing loose information, while others see logical aggregates: business cards, resumes, events, etc.
The need to organise information on the Web cannot be overemphasised. Microformat standards encourage sites to organise their information in ways that increase interoperability and accessibility. For example, if one wants to create an event or an events calendar, one could use the hCalendar microformat. Some other available microformats are adr for address information, hresume for resumes, and xfolk for collections of bookmarks.
Resources Description Framework (RDF)
The Resource Description Framework (RDF), developed by the World Wide Web Consortium (W3C), is one way of making the Web more meaningful. It is based on XML and used to describe content in a way that is understood by computers. RDF helps connect isolated databases across the web with consistent semantics. The structure of any expression in RDF is a collection of triples. Each RDF triple consists of two pieces of information (a subject and an object) and a linking fact (a predicate).
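The triple structure can be illustrated with a tiny C# sketch (a record standing in for an RDF statement; the subject and predicate URIs are illustrative):

using System;

// subject -- predicate --> object
record Triple(string Subject, string Predicate, string Object);

class RdfDemo
{
    static void Main()
    {
        // "This page was created by Bob" expressed as a triple.
        var t = new Triple(
            "http://example.com/index.html",           // subject: the resource
            "http://purl.org/dc/elements/1.1/creator", // predicate: the linking fact
            "Bob");                                    // object
        Console.WriteLine($"{t.Subject} {t.Predicate} {t.Object}");
    }
}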
Ontologies
Advances in Internet technologies make it possible for items on the Web to be organised in such a way that meaning can easily be derived from them. Ontologies are ways of organising and describing related items, and are used to represent semantics. They serve as a means of cataloguing Internet content in a way that can be understood by computers. RDF and OWL (Web Ontology Language) are designed for formatting ontologies.
Application Programming Interfaces (APIs)
Application Programming Interfaces (APIs) provide applications with access to external services and databases. For example, a traditional programming API, like Sun's Java API, allows programmers to use already-written methods and functions in their programs. In addition, Web services have APIs that permit their functionality and information to be shared or used across the Internet. Most major Web 2.0 companies (for example, eBay, Amazon, Google, Yahoo! and Flickr) provide APIs to encourage use of their services and data in the development of mashups, widgets or gadgets.
Mashups
A mashup is a means of combining content or functionality from existing Web services, Websites, RSS feeds or other solutions to serve a new purpose. For example, a skilled developer could mash up Google Maps with a tourist site to create more exciting services/sites on the Internet. The use of APIs helps to save a lot of time and money in the mashup process of combining two or more applications to create a new one; it is possible to build great mashups in a day. Note, however, that a mashup may rely on one or more third-party software services. Thus, if an API provider experiences downtime, the mashup will be unavailable as well because of this dependence. The way out is to program the mashup to avoid sites that could be down.
Widgets and Gadgets
Widgets are commonly referred to as gadgets. They are mini applications designed to run either standalone or as add-on features in Web pages. Widgets can be used for the personalisation of a user's Internet experience. Some personalised services may include the display of real-time weather conditions, viewing of maps, receiving event reminders, providing easy access to search engines, aggregating RSS feeds, and so on. The robustness of web services, APIs and other related tools makes it easy to develop widgets. Several catalogues of widgets exist online, with the most all-inclusive being Widgipedia, which provides an extensive collection of widgets and gadgets for a variety of platforms.
Web 2.0
The term “Web 2.0” is associated with Web applications that facilitate participatory information sharing, interoperability, user-centred design, and collaboration on the World Wide Web. A Web 2.0 site allows users to interact and collaborate with each other in a social media dialogue as creators (prosumers) of user-generated content in a virtual community, in contrast to websites where users (consumers) are limited to the passive viewing of content that was created for them. Examples of Web 2.0 include social networking sites, blogs, wikis, video sharing sites, hosted services, web applications, mashups and folksonomies.
Web 2.0 websites allow users to do more than just retrieve information. Building on what was already possible in Web 1.0, they provide the user with more user-interface, software and storage facilities, all through their browser. Users can provide the data that is on a Web 2.0 site and exercise some control over that data. These sites may have an “architecture of participation” that encourages users to add value to the application as they use it. Web 2.0 offers all users the same freedom to contribute.
Web 2.0 Tools
The client-side (web browser) technologies used in Web 2.0 development include Asynchronous JavaScript and XML (Ajax), Adobe Flash and the Adobe Flex framework, and JavaScript/Ajax frameworks such as the Dojo Toolkit, MooTools, jQuery, and so on. Ajax programming uses JavaScript to upload and download new data from the web server without undergoing a full page reload. To allow users to continue to interact with the page, communications such as data requests going to the server are separated from data coming back to the page (asynchronously).
Otherwise, the user would have to routinely wait for the data to come back before being able to do anything else on that page, just as a user has to wait for a page to complete a reload. Asynchrony also increases the overall performance of the site, as the sending of requests can complete more quickly, independent of the blocking and queueing required to send data back to the client. The data fetched by an Ajax request is typically formatted in XML or JSON (JavaScript Object Notation), the two widely used structured data formats. Since both of these formats are natively understood by JavaScript, a programmer can easily use them to transmit structured data in a web application. When this data is received via Ajax, the JavaScript program then uses the Document Object Model (DOM) to dynamically update the web page based on the new data, allowing for a rapid and interactive user experience. In short, using these techniques, Web designers can make their pages function like desktop applications. For example, Google Docs uses this technique to create a Web-based word processor.
Adobe Flex is another technology often used in Web 2.0 applications. Compared to JavaScript libraries like jQuery, Flex makes it easier for programmers to populate large data grids, charts, and other heavy user interactions. Applications programmed in Flex are compiled and displayed as Flash within the browser. Flash is capable of doing many things which were not possible pre-HTML5 in HTML, the language used to construct web pages. Of the many capabilities of Flash, the most commonly used in Web 2.0 is its ability to play audio and video files. This has allowed for the creation of Web 2.0 sites where video media is seamlessly integrated with standard HTML. In addition to Flash and Ajax, JavaScript/Ajax frameworks have recently become a very popular means of creating Web 2.0 sites. At their core, these frameworks do not use technology any different from JavaScript, Ajax, and the DOM.
What frameworks do is smooth over inconsistencies between web browsers and extend the functionality available to developers. Many of them also come with customisable, prefabricated “widgets” that accomplish such common tasks as picking a date from a calendar, displaying a data chart, or making a tabbed panel. On the server side, Web 2.0 uses many of the same technologies as Web 1.0. Languages such as PHP, Ruby, Perl, Python, JSP and ASP are used by developers to dynamically output data using information from files and databases. What has begun to change in Web 2.0 is the way this data is formatted. In the early days of the Internet, there was little need for different websites to communicate with each other and share data. In the new “participatory web”, however, sharing data between sites has become an essential capability. To share its data with other sites, a website must be able to generate output in machine-readable formats such as XML (Atom, RSS, etc.) and JSON. When a site's data is available in one of these formats, another website can use it to integrate a portion of that site's functionality into itself, linking the two together. This is one of the hallmarks of the philosophy behind Web 2.0.
XHTML
eXtensible Hypertext Markup Language (XHTML) is the newer version of HTML. XHTML combines the formatting strengths of HTML and the data-structure and extensibility strengths of XML to deploy applications for device-independent Web access. XHTML uses the tags and attributes of HTML along with the syntax of XML. Using HTML to write applications that run on electronic devices with fewer resources, such as a personal digital assistant (PDA) or mobile phone, can be an issue; this can be accomplished in XHTML, since it is more of a descriptive language than a structural one (unlike HTML).
Summary & Conclusion
In summary, a web search engine searches for information on the World Wide Web, FTP servers, USENET newsgroups, and so on, and presents its results as a list of hits. Unlike web directories, which are maintained by human editors, search engines operate algorithmically (or with a mixture of algorithmic and human input), using automated programs to survey the Web, build their databases, and rank the best URLs for each query.
The Internet is playing a great role in the delivery of content to users all across the world. Much research is going on every day to make it more accessible, available, interactive, meaningful and responsive to users' needs.
Most of the information in this discourse has been presented to help you keep up to date with current Internet and web programming developments.
