Video card
In computing, a video card, graphics card or graphics accelerator is a circuit board that controls what is shown on a computer monitor and performs the calculations needed to draw images, including 3D graphics.
A video card can be used to display a two-dimensional (2D) image like a desktop, or a three-dimensional (3D) image like a computer game. Computer-Aided Design (CAD) programs are often used by architects, engineers and designers to create 3D models on their computers. If a computer has a very fast video card, the user can create very detailed 3D models.
Most computers have basic video and graphics capabilities built into the computer's motherboard. These "integrated" video chips are not as fast as separate or "discrete" video cards. Normally, they are fast enough for basic computer use and for old or simple computer games on lower graphics settings. If a computer user wants faster and/or more detailed graphics, a video card can be installed.
Hardware
Video cards have their own processor (called a Graphics Processing Unit or GPU). The GPU is distinct from the main computer processor (called the Central Processing Unit or CPU). The CPU's job is to process the calculations needed to make the computer function. The GPU's job is to handle graphics calculations. 3D graphics calculations take a lot of CPU power, so having a video card to handle the graphics calculations lets the CPU work on other things like running computer programs.
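The sketch below illustrates this division of labour. It is only a minimal example, assuming an Nvidia card with the CUDA toolkit installed (the kernel name scale, the scaling factor and the array size are made up for illustration): the CPU launches a small program (a "kernel") on the GPU, which then does the per-element arithmetic in parallel while the CPU is free to do other work.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: each GPU thread scales one element of the array.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                              // one million floats
    float *d_data;
    cudaMalloc((void **)&d_data, n * sizeof(float));    // allocate video memory on the card
    cudaMemset(d_data, 0, n * sizeof(float));           // fill it with zeros

    // Hand the work to the GPU; the CPU could keep doing other tasks here.
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();                            // wait for the GPU to finish

    cudaFree(d_data);
    printf("GPU kernel finished\n");
    return 0;
}
```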
Video cards also have their own memory, separate from the main computer memory. It is usually much faster than main computer memory, too. This helps the GPU do its graphics calculations even faster. Most video cards also let one computer use more than one monitor at a time. Graphics manufacturers Nvidia and AMD (Advanced Micro Devices) have special technologies that allow two identical cards to be linked together in a single computer for much faster performance. Nvidia calls their technology SLI and AMD calls their technology CrossFire. Some modern graphics cards can even process physics calculations to create even more realistic-looking 3D worlds.
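A program can ask each installed card about its own memory. The sketch below is again only an illustration, assuming an Nvidia card with the CUDA runtime installed; it lists every card it can see and prints how much video memory that card has and how much is free.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);                         // how many CUDA-capable cards are installed
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);              // name, clock speeds, memory size, etc.
        cudaSetDevice(i);
        size_t freeBytes = 0, totalBytes = 0;
        cudaMemGetInfo(&freeBytes, &totalBytes);        // free/total video memory on this card
        printf("GPU %d: %s, %zu MiB total, %zu MiB free\n",
               i, prop.name,
               totalBytes / (1024 * 1024), freeBytes / (1024 * 1024));
    }
    return 0;
}
```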
Video cards typically connect to a motherboard using Peripheral Component Interconnect (PCI), the Accelerated Graphics Port (AGP) or Peripheral Component Interconnect Express (PCI Express or PCI-E). PCI-E is the newest and fastest of these connections, and nearly all modern video cards and motherboards use it. Before PCI-E was used, AGP was the standard connection for video cards. Before AGP, video cards were designed for PCI (sometimes called "regular" PCI).
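Software can also report which PCI Express slot (bus address) a card is plugged into. The short sketch below is only an illustration, again assuming an Nvidia card with the CUDA runtime installed.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        char busId[32];
        cudaDeviceGetPCIBusId(busId, sizeof(busId), i); // e.g. "0000:01:00.0"
        printf("GPU %d sits at PCI Express bus address %s\n", i, busId);
    }
    return 0;
}
```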
History
In early computing years, graphics processing was very basic and could be done by the CPU along with all the other processing. However, as computer games advanced and started using 3D graphics, the CPU had too much to do, and CPU-makers could not make CPUs fast enough to keep up. Eventually, video cards, with their own GPU, were invented to solve this problem. This lets the CPU do more of its own work since it does not have to spend any time on advanced graphics calculations; it can simply pass these calculations off to the GPU.
The first video cards connected to the motherboard via the Industry Standard Architecture (ISA) connection. The first popular non-IBM video cards were manufactured by a company called Hercules Computer Technology, Inc. Throughout the years, the importance of video cards has grown. As they evolved, a new connection standard was developed called the Accelerated Graphics Port (AGP). This was the first motherboard connection designed exclusively for video cards. It was much faster at transferring information between the video card and the rest of the computer. Eventually, the AGP connection became outdated, and a new connection, called PCI Express (PCI-E), became the standard for video cards. Most video cards manufactured today use PCI-E to connect to the motherboard.