Global Student Forum Bangladesh • Live Radio

Thursday, December 10, 2009

RADIO

GSFBD Live Radio

Choose your radio

Sunday, May 24, 2009

Input/Output




Input/output (I/O)
Main article: Input/output

Hard disks are common I/O devices used with computers.
I/O is the means by which a computer exchanges information with the outside world.[25] Devices that provide input or output to the computer are called peripherals.[26] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics[citation needed]. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.
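To make the idea concrete that one program can use the keyboard, the display, and a disk file (which serves as both input and output), here is a minimal sketch in Python; the file name demo.txt is an arbitrary choice for the example, not something from the text.

```python
# Minimal sketch: one program exercising several forms of I/O.
# The file name "demo.txt" is an arbitrary choice for this example.

line = input("Type something: ")        # input device: keyboard

with open("demo.txt", "w") as f:        # output device: disk (writing)
    f.write(line + "\n")

with open("demo.txt", "r") as f:        # input device: disk (reading back)
    stored = f.read()

print("Read back from disk:", stored)   # output device: display
```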




Multitasking
Main article: Computer multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn.[27]
One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.[28]
Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly — in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run at the same time without unacceptable speed loss.
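The time-sharing behaviour described above can be sketched as a toy round-robin scheduler in Python, purely as an illustration: each runnable program gets one slice in turn, and a program that is waiting for an event gives its slice up without doing real work. The task names and slice counts below are invented for the example.

```python
# Toy round-robin "time-sharing" sketch. Each task is a generator that
# yields once per time slice; a task that is waiting for I/O yields
# without doing any work. Names and slice counts are invented.

def worker(name, slices):
    for i in range(slices):
        print(f"{name}: doing slice {i}")
        yield                      # hand the CPU back to the scheduler

def waiting_task(name, ready_after):
    for _ in range(ready_after):
        yield                      # waiting for an event: no real work done
    print(f"{name}: event arrived, now running")
    yield

def scheduler(tasks):
    # Switch rapidly between tasks, giving each one slice in turn.
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)             # run one time slice
            tasks.append(task)     # put it back at the end of the queue
        except StopIteration:
            pass                   # task finished

scheduler([worker("editor", 3),
           waiting_task("printer-spooler", 2),
           worker("compiler", 2)])
```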

Multiprocessing
Main article: Multiprocessing

Cray designed many supercomputers that used multiprocessing heavily.
Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.
Supercomputers in particular often have unique architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[29] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
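As a small illustration of distributing work across several CPUs, here is a sketch using Python's standard multiprocessing module; the workload (summing squares over ranges of integers) and the input sizes are arbitrary examples chosen for the sketch.

```python
# Minimal sketch of multiprocessing: spread an "embarrassingly parallel"
# job across the available CPU cores. The workload is an arbitrary example.

from multiprocessing import Pool, cpu_count

def sum_of_squares(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [100_000, 200_000, 300_000, 400_000]
    with Pool(processes=cpu_count()) as pool:   # one worker per CPU core
        results = pool.map(sum_of_squares, inputs)
    print(results)
```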

Networking and the Internet
Main articles: Computer networking and Internet

Visualization of a portion of the routes on the Internet.
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.[30]
In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET.[31] The technologies that made the Arpanet possible spread and evolved.
In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Memory
Main article: Computer storage

Magnetic core memory was the computer memory of choice throughout the 1960s, until it was replaced by semiconductor memory.
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
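The cell-and-address picture described above can be mimicked directly with a list; the cell numbers 1357, 2468, and 1595 come from the text, while the memory size and the value 877 are arbitrary choices for this sketch.

```python
# Model memory as a list of numbered cells, as described above.
# The size (4096 cells) and the value 877 are arbitrary choices.

memory = [0] * 4096

# "put the number 123 into the cell numbered 1357"
memory[1357] = 123
memory[2468] = 877

# "add the number that is in cell 1357 to the number that is in cell 2468
#  and put the answer into cell 1595"
memory[1595] = memory[1357] + memory[2468]

print(memory[1595])   # 1000
```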
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
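The byte ranges and two's complement representation mentioned above can be demonstrated with Python's built-in int.to_bytes and int.from_bytes; the particular values are arbitrary.

```python
# A byte holds 256 values: 0..255 unsigned, or -128..+127 in two's complement.
# Larger numbers occupy several consecutive bytes. Values are arbitrary.

# -1 stored in one byte in two's complement is the bit pattern 0xFF (255).
print((-1).to_bytes(1, "big", signed=True))          # b'\xff'
print(int.from_bytes(b"\xff", "big", signed=True))   # -1
print(int.from_bytes(b"\xff", "big", signed=False))  # 255

# A larger number needs more than one byte: 70000 fits in four bytes.
raw = (70000).to_bytes(4, "big", signed=True)
print(raw, int.from_bytes(raw, "big", signed=True))  # b'\x00\x01\x11p' 70000
```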
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[24]
In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
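Hardware caches work automatically, but the underlying idea of keeping frequently needed data in a small, fast store can be sketched in software. The dictionary-based cache below is only an analogy for the behaviour described above, not how a CPU cache is actually built; slow_read and its delay are invented for the example.

```python
# Software analogy of a cache: a small, fast store (a dict) sits in front
# of a slow "main memory" lookup. The slow_read function and its delay
# are invented purely for illustration.

import time

def slow_read(address):
    time.sleep(0.01)          # pretend main memory is slow
    return address * 2        # made-up "data" for the example

cache = {}

def cached_read(address):
    if address not in cache:              # cache miss: go to main memory
        cache[address] = slow_read(address)
    return cache[address]                 # cache hit: answered immediately

cached_read(42)   # first access: slow
cached_read(42)   # repeated access: served from the cache
```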

Tuesday, May 19, 2009

Monitor

Monitor:
Visual display unit, a device that displays images generated by computers
Monitor (synchronization), an approach to synchronize two or more computer tasks that use a shared resource
Foldback (sound engineering), the use of rear-facing speakers on stage during a music performance
Machine code monitor, software allowing users to view or change memory locations on a computer
Medical monitor, automated medical device that senses and displays a patient's vital signs
Studio monitor, a loudspeaker designed for audio production applications
Supervisory control, a system that measures or controls data
Video monitor, a device that displays output from a video-generating device
Virtual Machine Monitor, software which virtualizes a computer hardware platform, allowing multiple system images to run simultaneously

Warships:
Monitor (warship), a small warship with one or several very large guns
River monitor, a class of warship used in inland waterways
USS Monitor, a United States Navy warship used during the American Civil War
USS Monitor (LSV-5), a United States Navy vehicle landing ship

Companies, media and entertainment:
Monitor (BBC TV)
Monitor (comics), a DC Comics character
Monitor (NBC Radio)
Monitor, a project management software company
Republic record label

Place names:
Loope, California, formerly Monitor
Monitor, Indiana, a town in the United States
Monitor, Oregon, an unincorporated community in the United States
Monitor, Alberta, Canada

Other meanings:
Water jet used in hydraulic mining
Fire monitor, a water jet used for firefighting
Monitor (NHS), a United Kingdom government agency also known as the Independent Regulator for NHS Foundation Trusts
Monitor lizard, a lizard
Hall monitor, a student who supervises the corridors of a school

Hardware

Input Hardware: bitpad, document reader, graphics tablet, I/O, joystick, keyboard




Bitpad:
A bitpad is a digitizing tablet used as a computer input device. The Evening Standard (London, March 7, 2006) describes how a writer can sign his or her name onto a bitpad with a magnetic pen from any location, even another continent, while at the other end a robot arm holding a real pen reproduces the writing exactly on the page. Graphic Arts Monthly (April 1, 1989) mentions a Summagraphics BitPad Plus graphics tablet being used occasionally for drawing in a Mac-based desktop publishing setup.
document reader
Processing Hardware: RAM, ROM, processor, BIOS, etc.
Output Hardware: monitors, printers, plotters, electrical transducers, lights, motors, buzzers, etc.

Computer



Computer: A computer is a machine that manipulates data according to a list of instructions. Although mechanical examples of computers have existed throughout history, the first resembling a modern computer were developed in the mid-20th century (1940–1945). The first electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers (PC).[1] Modern computers based on tiny integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into a wristwatch, and can be powered by a watch battery. Personal computers in their various forms are icons of the Information Age, what most people think of as a "computer", but the embedded computers found in devices ranging from fighter aircraft to industrial robots, digital cameras, and children's toys are the most numerous.








History of the Computer:
Main article: History of computer hardware

The Jacquard loom was one of the first programmable devices.
The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards though, the word began to take on its more familiar meaning, describing a machine that carries out computations.[3]
The history of the modern computer begins with two separate technologies—automated calculation and programmability—but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when.[4] This is the essence of programmability.
The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer.[5] It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour,[6][7] and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed to compensate for the changing lengths of day and night throughout the year.[5]
The end of the Middle Ages saw a re-invigoration of European mathematics and engineering. Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers, but none fit the modern definition of a computer, because they could not be programmed.
In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.
It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine.[8] Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed.
In the late 1880s Herman Hollerith invented the recording of data on a machine readable medium. Prior uses of machine readable media, above, had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..."[9] To process these punched cards he invented the tabulator, and the key punch machines. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
Alan Turing is widely regarded to be the father of modern computer science. In 1936 Turing provided an influential formalisation of the concept of the algorithm and computation with the Turing machine. Of his role in the modern computer, Time Magazine in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine." [10]
George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November of 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication including complex arithmetic and programmability.[11]




Defining characteristics of some early digital computers of the 1940s (In the history of computing hardware)
Each entry lists: name; first operational; numeral system; computing mechanism; programming; Turing complete.

Zuse Z3 (Germany): May 1941; binary; electro-mechanical; program-controlled by punched film stock (but no conditional branch); Turing complete: yes (1998).
Atanasoff–Berry Computer (US): 1942; binary; electronic; not programmable (single purpose); Turing complete: no.
Colossus Mark 1 (UK): February 1944; binary; electronic; program-controlled by patch cables and switches; Turing complete: no.
Harvard Mark I – IBM ASCC (US): May 1944; decimal; electro-mechanical; program-controlled by 24-channel punched paper tape (but no conditional branch); Turing complete: no.
Colossus Mark 2 (UK): June 1944; binary; electronic; program-controlled by patch cables and switches; Turing complete: no.
ENIAC (US): July 1946; decimal; electronic; program-controlled by patch cables and switches; Turing complete: yes.
Manchester Small-Scale Experimental Machine (UK): June 1948; binary; electronic; stored-program in Williams cathode ray tube memory; Turing complete: yes.
Modified ENIAC (US): September 1948; decimal; electronic; program-controlled by patch cables and switches plus a primitive read-only stored programming mechanism using the Function Tables as program ROM; Turing complete: yes.
EDSAC (UK): May 1949; binary; electronic; stored-program in mercury delay line memory; Turing complete: yes.
Manchester Mark 1 (UK): October 1949; binary; electronic; stored-program in Williams cathode ray tube memory and magnetic drum memory; Turing complete: yes.
CSIRAC (Australia): November 1949; binary; electronic; stored-program in mercury delay line memory; Turing complete: yes.