Sunday, April 27, 2008

Electric power transmission


"Power line" redirects here. For the conservative U.S. blog, see Power Line. For the telecommunication technology, see Power line communication."Power grid" redirects here. For the board game, see Power Grid (board game).



Electric power transmission is the bulk transfer of electrical power, one stage in the delivery of electricity to consumers. Typically, power transmission is between the power plant and a substation near a populated area; electricity distribution is the delivery from the substation to the consumers. Electric power transmission allows distant energy sources (such as hydroelectric power plants) to be connected to consumers in population centers, and may allow exploitation of low-grade fuel resources that would otherwise be too costly to transport to generating facilities.
Due to the large amount of power involved, transmission normally takes place at high voltage (110 kV or above). Electricity is usually transmitted over long distances through overhead power transmission lines. Underground power transmission is used only in densely populated areas because of its high cost of installation and maintenance, and because the high reactive power of buried cables produces large charging currents and makes voltage management difficult.

A power transmission system is sometimes referred to colloquially as a "grid"; however, for reasons of economy, the network is rarely a literal grid. Redundant paths and lines are provided so that power can be routed from any power plant to any load center through a variety of routes, based on the economics of the transmission path and the cost of power. Transmission companies carry out extensive analysis to determine the maximum reliable capacity of each line, which, due to system stability considerations, may be less than the physical or thermal limit of the line. Deregulation of electricity companies in many countries has led to renewed interest in reliable, economic design of transmission networks. However, in some places the gaming of a deregulated energy system has led to disaster, such as the California electricity crisis of 2000 and 2001.







  • AC power transmission
AC power transmission is the transmission of electric power by alternating current. Transmission lines usually carry three-phase AC; single-phase AC is sometimes used in railway electrification systems. In urban areas, trains may instead be powered by DC at around 600 volts.

Overhead conductors are not covered by insulation. The conductor material is nearly always an aluminum alloy, made into several strands and possibly reinforced with steel strands. Conductors are a commodity supplied by several companies worldwide. Improved conductor materials and shapes are regularly introduced to increase capacity and modernize transmission circuits. Conductor sizes in overhead transmission work range from #6 American wire gauge (about 13 square millimetres) to 1,590,000 circular mils (about 806 square millimetres), with varying resistance and current-carrying capacity. Beyond a point, thicker wires give only a small increase in capacity because of the skin effect, which causes most of the current to flow close to the surface of the wire.

Today, transmission-level voltages are usually considered to be 110 kV and above. Lower voltages, such as 66 kV and 33 kV, are usually considered sub-transmission voltages but are occasionally used on long lines with light loads. Voltages below 33 kV are usually used for distribution. Voltages above 230 kV are considered extra-high voltage and require designs different from the equipment used at lower voltages.

Since overhead transmission lines are uninsulated wire, their design requires minimum clearances to be observed to maintain safety.


Bulk power transmission

Engineers design transmission networks to transport the energy as efficiently as feasible, while at the same time taking into account economic factors, network safety and redundancy. These networks use components such as power lines, cables, circuit breakers, switches and transformers.





A transmission substation decreases the voltage of the incoming electricity, connecting long-distance, high-voltage transmission to local, lower-voltage distribution. It also reroutes power to other transmission lines that serve local markets, and may "reboost" power so that it can travel greater distances from the generation source along the high-voltage transmission lines. (Pictured: the PacifiCorp Hale Substation, Orem, Utah.)

Transmission efficiency is improved by increasing the voltage using a step-up transformer, which reduces the current in the conductors while keeping the power transmitted nearly equal to the power input. The reduced current flowing through the conductor reduces the losses in the conductor and, since according to Joule's law the losses are proportional to the square of the current, halving the current reduces the transmission loss to one quarter of its original value.

A transmission grid is a network of power stations, transmission circuits, and substations. Energy is usually transmitted within the grid as three-phase AC. DC systems require relatively costly conversion equipment, which may be economically justified for particular projects. Single-phase AC is used only for distribution to end users, since it is not usable for large polyphase induction motors. In the 19th century two-phase transmission was used, but it required either three wires with unequal currents or four wires. Higher-order phase systems require more than three wires but deliver only marginal benefits.

The capital cost of electric power stations is so high, and electric demand so variable, that it is often cheaper to import some portion of the variable load than to generate it locally. Because nearby loads are often correlated (hot weather in the Southwest of the United States might cause many people there to turn on their air conditioners at once), imported electricity must often come from far away.
Because of the economics of load balancing, transmission grids now span countries and even large portions of continents. The web of interconnections between power producers and consumers ensures that power can flow even if a few links are inoperative.

The unvarying (or slowly varying over many hours) portion of the electric demand is known as the "base load", and is generally served best by large facilities (efficient due to economies of scale) with low variable costs for fuel and operations, i.e. nuclear, coal and hydro plants. Renewables such as solar, wind and ocean/tidal power are not considered "base load" but can still add power to the grid. Smaller, higher-cost sources are then added as needed.

Long-distance transmission of electricity (thousands of miles) is cheap and efficient, with costs of US$0.005 to 0.02 per kilowatt-hour (compared to annual averaged large-producer costs of US$0.01 to 0.025 per kilowatt-hour, retail rates upwards of US$0.10 per kilowatt-hour, and multiples of retail for instantaneous suppliers at unpredicted moments of highest demand). Thus distant suppliers can be cheaper than local sources; New York City, for example, buys a lot of electricity from Canada. Multiple local sources, even if more expensive and infrequently used, can make the transmission grid more tolerant of weather and other disasters that can disconnect distant suppliers.

Long-distance transmission also allows remote renewable energy resources to displace fossil fuel consumption. Hydro and wind sources cannot be moved closer to high-population cities, and solar costs are lowest in remote areas where local power needs are least. Connection costs alone can determine whether any particular renewable alternative is economically sensible, and the cost of transmission lines can be prohibitive.


Grid input



At the generating plants the energy is produced at a relatively low voltage of up to 30 kV (Grigsby, 2001, p. 4-4), then stepped up by the power station transformer to a higher voltage (115 kV to 765 kV AC, ± 250-500 kV DC, varying by country) for transmission over long distances to grid exit points (substations).

Losses

Transmitting electricity at high voltage reduces the fraction of energy lost to Joule heating. For a given amount of power, a higher voltage reduces the current and thus the resistive losses in the conductor. For example, raising the voltage by a factor of 10 reduces the current by a corresponding factor of 10 and therefore the losses by a factor of 100, provided the same sized conductors are used in both cases. Even if the conductor size is reduced tenfold to match the lower current, the losses are still reduced tenfold. Long-distance transmission is typically done with overhead lines at voltages of 115 to 1,200 kV. However, at extremely high voltages (more than 2,000 kV between conductor and ground), corona discharge losses are so large that they can offset the lower resistive losses in the line conductors.

Transmission and distribution losses in the USA were estimated at 7.2% in 1995 [2], and in the UK at 7.4% in 1998 [3].

As of 1980, the longest cost-effective distance for electricity transmission was 4,000 miles (7,000 km), although all present transmission lines are considerably shorter (see Present Limits of High-Voltage Transmission).

In an alternating current transmission line, the inductance and capacitance of the line conductors can be significant. The currents that flow in these components of transmission line impedance constitute reactive power, which transmits no energy to the load. Reactive current flow causes extra losses in the transmission circuit. The ratio of real power (transmitted to the load) to apparent power is the power factor. As reactive current increases, the reactive power increases and the power factor decreases. For systems with low power factors, losses are higher than for systems with high power factors.
Utilities add capacitor banks and other components throughout the system — such as phase-shifting transformers, static VAR compensators, and flexible AC transmission systems (FACTS) — to control reactive power flow for reduction of losses and stabilization of system voltage.
Electrical power is always partially lost in transmission. This applies to short distances, such as between components on a printed circuit board, as well as to cross-country high-voltage lines. The major component of power loss is ohmic loss in the conductors, equal to the product of the resistance of the wire and the square of the current. For a system which delivers a power P at unity power factor at a particular voltage V, the current flowing through the cables is I = P / V. The power lost in the lines is then P_loss = I²R = (P / V)²R = P²R / V². Therefore, the power lost is proportional to the resistance and inversely proportional to the square of the voltage: a higher transmission voltage reduces the current and thus the power lost during transmission.

In addition, a low resistance is desirable in the cable. While copper cable could be used, aluminium alloy is preferred for its much better conductivity-to-weight ratio, which makes it lighter to support, as well as for its lower cost. The aluminium is normally supported mechanically on a steel core.
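The loss relationship above can be checked numerically. A minimal sketch follows; the 100 MW load and 10 Ω line resistance are illustrative assumptions, not figures from the article:

```python
def line_loss(power_w, voltage_v, resistance_ohm):
    """Ohmic loss in a line delivering power_w at voltage_v
    (unity power factor): I = P / V, so P_loss = I^2 * R."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

# Illustrative values: deliver 100 MW over a line with 10 ohms of resistance.
P = 100e6
R = 10.0

loss_high = line_loss(P, 110e3, R)  # transmit at 110 kV
loss_low = line_loss(P, 11e3, R)    # same power at one tenth the voltage

# Raising the voltage by a factor of 10 cuts the loss by a factor of 100.
print(loss_low / loss_high)  # 100.0
```

The ratio is exactly (110/11)² = 100, matching the P²R/V² formula above.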

HVDC

High-voltage direct current (HVDC) is used to transmit large amounts of power over long distances or for interconnections between asynchronous grids. When electrical energy must be transmitted over very long distances, it can be more economical to transmit using direct current instead of alternating current. For a long transmission line, the smaller losses and reduced construction cost of a DC line can offset the additional cost of converter stations at each end. Also, at high AC voltages, significant (though economically acceptable) amounts of energy are lost to corona discharge and to the capacitance between phases or, in the case of buried cables, between phases and the soil or water in which the cable is buried.

HVDC links are sometimes used to stabilize against control problems in the AC electricity flow. To transmit AC power in either direction between, say, Seattle and Boston would require continuous, highly challenging real-time adjustment of the relative phase of the two electrical grids. With HVDC, the interconnection would instead: (1) convert AC in Seattle into HVDC; (2) use HVDC for the three thousand miles of cross-country transmission; and (3) convert the HVDC to locally synchronized AC in Boston, and optionally in other cooperating cities along the transmission route. One prominent example of such a transmission line is the Pacific DC Intertie in the Western United States.

Grid exit

At the substations, transformers are again used to step the voltage down to a lower voltage for distribution to commercial and residential users. This distribution is accomplished with a combination of sub-transmission (33 kV to 115 kV, varying by country and customer requirements) and distribution (3.3 to 25 kV). Finally, at the point of use, the energy is transformed to low voltage (100 to 600 V, varying by country and customer requirements).

Limitations

The amount of power that can be sent over a transmission line is limited. The origins of the limits vary depending on the length of the line. For a short line, the heating of conductors due to line losses sets a "thermal" limit. If too much current is drawn, conductors may sag too close to the ground, or conductors and equipment may be damaged by overheating. For intermediate-length lines on the order of 100 km (60 miles), the limit is set by the voltage drop in the line. For longer AC lines, system stability sets the limit to the power that can be transferred. Approximately, the power flowing over an AC line is proportional to the sine of the phase angle between the receiving and transmitting ends. Since this angle varies depending on system loading and generation, it is undesirable for the angle to approach 90 degrees. Very approximately, the allowable product of line length and maximum load is proportional to the square of the system voltage. Series capacitors or phase-shifting transformers are used on long lines to improve stability. High-voltage direct current lines are restricted only by thermal and voltage drop limits, since the phase angle is not material to their operation.
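The power-angle relationship described above can be illustrated with a short calculation. For an idealized lossless line, the real power transferred is approximately P = Vs·Vr·sin(δ)/X; the 345 kV voltage and 100 Ω series reactance below are illustrative assumptions, not values from the text:

```python
import math

def ac_line_power(v_send, v_recv, reactance_ohm, angle_deg):
    """Real power over an idealized (lossless) AC line:
    P = Vs * Vr * sin(delta) / X."""
    return v_send * v_recv * math.sin(math.radians(angle_deg)) / reactance_ohm

# Illustrative: 345 kV at both ends, 100 ohm series reactance.
V, X = 345e3, 100.0
p_max = ac_line_power(V, V, X, 90)  # theoretical maximum, at 90 degrees
p_30 = ac_line_power(V, V, X, 30)   # a comfortable operating angle

# At 30 degrees the line carries half its theoretical maximum, which is
# why operators keep the angle well below 90 degrees for stability margin.
print(round(p_30 / p_max, 6))  # 0.5
```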

Communications

Operators of long transmission lines require reliable communications for control of the power grid and, often, associated generation and distribution facilities. Fault-sensing protection relays at each end of the line must communicate to monitor the flow of power into and out of the protected line section so that faulted conductors or equipment can be quickly de-energized and the balance of the system restored. Protection of the transmission line from short circuits and other faults is usually so critical that common carrier telecommunications are insufficiently reliable. In remote areas a common carrier may not be available at all.


Wednesday, April 16, 2008


Networking Computer Systems

Using Ethernet

Trash networking means 'Ethernet'
Networking is a complex subject – although it needn't be. For example, a large corporate network might link hundreds or thousands of workstations, all communicating and sharing data across a series of large buildings. But if you only want to link two to twenty systems you needn't get bogged down in the minutiae of how networks operate.
There are many different networking standards in use – each internationally agreed through the Institute of Electrical and Electronics Engineers (IEEE). For linking PCs, the easiest option is to use 'Ethernet' (see box, below). There's plenty of old equipment in circulation, and even new equipment isn't prohibitively expensive to obtain.

Using Ethernet requires that each system be fitted with a 'network interface card', or NIC, which allows the PC to communicate with the network. If the PC doesn't have one already, you'll have to fit one. The things to watch for are the type of card you can get hold of (whether it's an ISA-slot or PCI-slot card) and the speed it operates at (see the discussion of speed in relation to hubs below). Check the hardware compatibility data for your Linux distribution to see which NICs are supported.
With the older 'thinnet' system, PCs were connected together along a single cable or 'bus', one to the next. Simple, but the RG-58 cables are relatively expensive to buy, and the BNC connectors are a pain to fit. Thinnet also has problems when a lot of machines use the network at once. For these reasons we prefer 'twisted pair'. Twisted-pair cable is easier to get hold of, and the connectors are a lot easier to fit. On UTP networks, cabling faults are also less likely to bring the whole network down – only one machine is affected, rather than a whole block of clients being isolated.
The other component you need is a hub, which connects the systems together – one hub port and UTP cable per client. The role of the hub is to manage the movement of data over the network. Hubs are rated according to speed. There are a lot of old 10Base-T '10-meg' hubs around, while most new 100Base-T '100-meg' hubs are dual speed and will work at the speed of the NIC connected to each port. If you only have old 10-meg NICs, then a 10-meg hub will be OK. But it's a waste of resources to use a lot of 100-meg NICs (most of which are dual speed too) on a network with a 10-meg hub – upgrade to a 100-meg dual-speed hub instead.

Networks and operating systems
To run a network you need a server. This controls the network, and organises the network services that the clients on the network use. You have two options: use a proprietary system, or use Gnu/Linux. Proprietary systems are usually additional to the operating systems, whilst with Linux all the required software comes as standard with the Linux distribution.
Proprietary network systems, like Microsoft's Windows NT/Windows Server or Novell's NetWare, can be obtained from computer fairs. But to remain legal you not only have to have a valid license for the program – you must also have sufficient client licenses to cover the maximum number of systems that you intend to connect to your network. So, as well as hitting you for the software license, they also tax the connection of machines to the network.
Gnu/Linux doesn't have these problems. As a Unix clone, it's designed for networking systems together – in fact, it so desperately wants to network that in the absence of a network it will even network with itself using a 'loopback interface'.
Most of the major Linux distributions (see Salvage Server Project Report 1) have built-in networking capabilities, but some are better than others. Red Hat is a really good server system, but it's better suited to text-based/command-line configuration. SuSE Linux, with its YaST configuration system, is excellent for less experienced Linux users because of its graphical network configuration tools.
In this report we're assuming that you've got a system installed with Gnu/Linux. Rather than discuss the precise details of configuration (as it's slightly different with different distributions) we'll outline the main principles of network configuration. However, specific parts of this process, like configuring network services, will be tackled as separate Salvage Server Project Reports.

More ambitious networks
Networking a few computers together is relatively easy. But problems will arise when you want to get more ambitious, such as when you want to add more servers for specific tasks, or you want all the systems on the network to share an Internet connection (see diagram right). Simple networks consist of just a hub and PCs. But as the network grows you will need to modify the way the network hardware is organised.
Most likely, you'll want to connect to the Internet. This means that you will have to configure some sort of router/firewall system to manage the flow of data from the network into the phone line/broadband connection. This can be done from the server, but it is very complex. For this reason we'd advise people to use a dedicated machine as a firewall. This can be installed with Linux and configured manually. But what's far easier is to use a very old machine (e.g., a '486 DX4 or an early Pentium-1 P60-P100) installed with Smoothwall. This has a simple installation and maintenance interface that's easy to use for the less geeky Linux user. Installing/using Smoothwall will be the subject of a future Salvage Server Project Report.
Most hubs come as 4-, 5-, 8-, 16- or 24-port units. When you fill all the ports available on the hub you have to get a larger hub, or get a second hub and 'daisy-chain' it. Most hubs have a special port, or a port with a switch beside it, that allows you to plug that port into a port on your main hub. The port then acts as a sort of extension socket for the main hub.
When hubs send data over the network, they send it across all the cables connected to the hub. On networks with lots of machines, or where you have a special server that works with only a few other machines on the network, this can cause a lot of congestion. In these situations you can create small sub-sections within the network using a 'switch' to manage network congestion.
The switch looks and works like a hub. But it monitors network addresses and will route data for the machines connected to it through its own ports. This allows you to manage traffic between different areas of the network without the need to physically split the network using a router. Only data meant for other machines on the network gets routed back to the hub. This helps control congestion on the network. It also means you can set up small areas on the network where people can work at high capacity with servers, or network resources like printers, without drawing down the capacity of the whole network.
Cabling the network
Cabling the network requires a little thought. Firstly, you're going to need one cable for each machine – with sufficient length to snake around from its location to the hub. You can buy network cables of varying lengths from 1 metre up to 100 metres. But the cheapest option if you need quite a few cables is to buy some crimps and RJ-45 connectors and make your own (see box below).
There are restrictions on cabling-up Ethernet networks: Cables should be a minimum of 1 metre long, and a maximum of 100 metres long – in general, the longer the cable runs the more power your hub will have to pump into the network to keep things running.
For 100-meg networks, the cable and the connectors must be rated as 'Category-5' or 'UTP Cat-5' – the old 'Cat-3' cable/connectors are not of sufficient quality to reliably transfer data, especially when the network is busy.
Avoid gathering up cable into a large number of tight coils, as this can impede the flow of data (the coiled cable acts as an inductor, which dampens the flow of current).
Avoid running the cables alongside mains wiring or electrical ring mains, because the Ethernet cables may induce electrical noise that can interfere with radios, stereos and other sensitive equipment connected to that circuit.
If daisy chaining hubs and switches, don't string out more than three hubs/switches in a row – use another port on the main hub to increase capacity instead.
The other problem you may have is with electrical noise. Electric motors, large transformers, power supplies and switchgear/power relays create magnetic fields that can induce or dampen the current flow in the cable. With excessive electrical noise you may find that data transfer is slowed, or impossible. In these situations you should use the more expensive 'shielded twisted pair' (STP) cable to cut down the level of interference.
If the cables are installed, and left in-situ, then they should pose no problem. Problems will occur where cables are regularly plugged, unplugged, and moved around. The small clip on the RJ-45 connector is very brittle and can easily be broken off after a long period of repeated unplugging. In these situations just cut off the connector and crimp another on. The metal in the wires within the UTP cable is also very soft – in order to make them flexible. However, repeated bending into tight curves, or stretching the length of the cable, can lead to deformation and eventual breaking of the wire. In these situations, after checking to be sure that the cable is faulty, you'll have to junk the cable and get a new one.
The important thing when cabling any network is to leave slack in all the cables, including the power cable for the hubs/switches. This will ensure that if caught, the cables are not stretched and/or pulled out of their sockets. It's also important to plug the hub power supply into a power socket that's not likely to be unplugged or turned off. If necessary, label the power supply and any switches to ensure that they are not turned-off/removed by accident.
'Thin client' networks
As servers and networks become more powerful, rather than giving each user a very powerful workstation, system engineers run software on the server and provide a less powerful terminal through which the user accesses their programs. These systems are called 'thin clients'.
In the trash tech. world, thin clients are also used. Old, less powerful computers can be configured as thin clients, accessing programs on the server at higher speed than if they were run locally. Whilst there's nothing wrong in theory with this model, from an engineering point of view it has a 'single point of failure': if the server goes down, nothing works anywhere on the network.
For this reason, especially where there may not be immediate support to repair/reconfigure the server, thin clients should be used with caution. In our view, it's better to have some old machines doing basic functions, like word processing or accessing the Internet, rather than trading off the additional speed in return for a less secure system.
IP numbering and dynamic numbering (DHCP)
Networks use the same type of numbering as the Internet – the 'Transmission Control Protocol/Internet Protocol' (TCP/IP) system. IP numbers are made up of 4 bytes – 32 binary digits. To make this more human-friendly, these are presented as four decimal numbers, e.g. 192.168.67.1.
To ensure that the numbering of the LAN doesn't conflict with the Internet, there are 'reserved numbers' that should be used to number your network. The range depends upon the type of network you are creating:
'Class A' networks – 10.0.0.0 to 10.255.255.255 – 16.7 million possible numbers, for use on large networks.
'Class B' networks – 172.16.0.0 to 172.31.255.255 – 1 million possible numbers, for use on medium-sized networks.
'Class C' networks – 192.168.0.0 to 192.168.255.255 – 65,536 possible numbers, for use on smaller networks.
Small networks are 'Class C'. IP addresses are used in blocks of 256 numbers, so you would select the block 192.168.1.X or 192.168.2.X, etc., where 'X' runs from 0 to 255 (256 numbers in total). Each subnet has a 'network address' (ending .0) and a 'broadcast address' (ending .255, used to contact all clients). Usually you also have a 'gateway' address, through which the subnet accesses other networks or the Internet, and an address for the server. So in any subnet there are 252 possible IP numbers that we can use to connect client machines to the network.
Along with the IP number, there is also a 'netmask'. This is another 4-byte binary number that is used to control how the IP number is interpreted. On any network, the netmask can be used to 'mask' the selection of IP numbers by the NICs. This allows the block of 256 numbers to be split into smaller networks of 2, 6, 14, 30, 62 or 126 usable numbers. To do this you have to set up gateways or routers to manage communications between the subnets, which makes it rather complex. Unless you have a good reason to split into smaller subnets, small networks nearly always use the entire block of 256 numbers, and the netmask you use will therefore be 255.255.255.0.
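Python's standard ipaddress module can illustrate this numbering scheme. A minimal sketch, using the 192.168.1.X block discussed in the text:

```python
import ipaddress

# A typical small LAN: one block of 256 numbers with netmask 255.255.255.0.
lan = ipaddress.ip_network("192.168.1.0/24")
print(lan.netmask)        # 255.255.255.0
print(lan.num_addresses)  # 256

# The same block can be masked into smaller subnets, e.g. four networks of
# 64 addresses each, leaving 62 usable numbers once the network and
# broadcast addresses are reserved.
for subnet in lan.subnets(new_prefix=26):
    print(subnet, subnet.num_addresses - 2, "usable hosts")
```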
When planning a network you have to decide the numbering of the server and any clients. By default (although you can change them) the main network addresses are: Server – 192.168.X.1.
Gateway – 192.168.X.254.
Broadcast – 192.168.X.255.
The client machines can have fixed IP addresses. If you use fixed addresses you have to set these individually for each client, and ensure that IP addresses do not conflict with each other. On a small network of a few machines this is not a problem. But as you increase the number of machines it can increase the amount of work if you ever need to reconfigure the network numbering scheme. For this reason larger networks allocate numbers dynamically using the Dynamic Host Configuration Protocol (DHCP) service (the 'dhcpd daemon').
DHCP works as part of the server system. It listens for requests from machines as they boot up, and allocates each an IP number to use. The numbers allocated are fixed as a range in the configuration file for the DHCP program. For example, if the addresses 192.168.X.100 to 192.168.X.199 are set as the range, DHCP can log up to 100 machines on to the network. DHCP is also useful where people may bring their own computers to your network. Most operating systems automatically configure the use of DHCP so that the system can be logged on to any network it is connected to.
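As a sketch, the corresponding fragment of the ISC DHCP server's configuration file (dhcpd.conf) might look like this, taking X as 1 and using the addresses from the text; the lease time is an arbitrary illustrative value:

```
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.199;      # up to 100 dynamic clients
    option routers 192.168.1.254;           # the gateway address
    option broadcast-address 192.168.1.255;
    default-lease-time 86400;               # illustrative: one-day leases
}
```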

Domain names and DNS
The Internet uses names to identify different machines. The same can be done on a LAN to allow people to access the functions of a LAN more easily. Also, on a LAN that uses DHCP, often it's easier to use names than numbers because you won't always know what the network numbering scheme is.
Each network can be given a domain name, for example 'mynetwork.lan'. The purpose of this is to provide a general name to which host names – the names that identify specific machines or services – can be added. For example, if we ran an Intranet to provide a local web service, this could be called 'www.mynetwork.lan'. Note that the '.lan' part is not necessary – but it's very useful to distinguish services that form part of the local area network from those on other connected networks or the Internet.
This is implemented via a 'domain name server' (DNS). The DNS system, called Bind, is provided by a daemon, called named, that runs on the network server, or on some other server that can be contacted from the LAN. It receives DNS requests from the network for:
Forward resolution – turning names into IP numbers.
Reverse resolution – turning IP numbers into names.
To create a DNS server you have to create some files. First you need to edit the files that control the operation of Bind. Then you have to create 'zone files' that provide the details required to provide DNS services. The 'forward zone' files provide information about the hosts that are identified with a particular domain, and the numbers that they respond to. The 'reverse zone' files list the IP numbers, within the local subnet, that identify particular machines.
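As a sketch only, a minimal 'forward zone' file for the 'mynetwork.lan' domain used in the text might look like the following; the host names, serial number and 192.168.1.1 address are illustrative assumptions:

```
$TTL 86400
@    IN  SOA  ns.mynetwork.lan. admin.mynetwork.lan. (
         2008041601   ; serial
         3600         ; refresh
         900          ; retry
         604800       ; expire
         86400 )      ; minimum TTL
@    IN  NS   ns.mynetwork.lan.
ns   IN  A    192.168.1.1
www  IN  A    192.168.1.1    ; the Intranet web server
```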
DNS can be quite complex to set up, and the process varies from distribution to distribution. For this reason it will be covered in greater detail in another Salvage Server Project Report.

Sharing printers and files
Linux systems, even clients, operate printers via a 'queue'. Print jobs are received and queued. The printer daemon continually monitors the queue, and when it finds a job waiting, it passes it on to the printer. This means that to create a networked printer you create a print queue on the server, and then set up printer queues on the clients that point or 'forward' to the queue on the server.
The information that you have to supply to the client is the name of the print queue and the type of printer. The name is usually in the form 'printer@server.lan'. You then set the printer information locally in order to ensure that the format of the data sent to the queue is correct for that type of printer. You then give a name to the remote queue in order to differentiate it from other printers that you might configure on the client. You can configure a printer locally, connected directly to the client, in addition to one or more networked printers.
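The queue-and-forward model described above can be sketched in a few lines of Python (a conceptual illustration only – real print daemons such as lpd or CUPS are far more involved, and the queue name is the hypothetical one used above):

```python
from collections import deque

class PrintQueue:
    """A toy model of a print queue on the server."""
    def __init__(self, name):
        self.name = name
        self.jobs = deque()

    def submit(self, job):
        # Clients (or forwarding queues) drop jobs here.
        self.jobs.append(job)

    def process_next(self):
        # The printer daemon monitors the queue and, when it finds
        # a job waiting, passes it on to the printer.
        if self.jobs:
            return self.jobs.popleft()
        return None

class ClientQueue:
    """A client-side queue that forwards to the server's queue."""
    def __init__(self, remote):
        self.remote = remote

    def submit(self, job):
        self.remote.submit(job)

server_q = PrintQueue("printer@server.lan")
client_q = ClientQueue(server_q)
client_q.submit("report.txt")
job = server_q.process_next()
print(job)  # report.txt
```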
The other main use of a network is to share files. For Linux machines, the simplest way of sharing files is the Network File System (NFS). NFS reads information from a file called 'exports' (most distributions give you a graphical program to configure this file) that specifies which directories within the local file system will be made available over the network. Each line of the file also allows control over which machines have access, and whether that access is read/write or read-only.
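A typical 'exports' file might contain entries like the following (host names, network range and paths are illustrative; see 'man exports' for the full option syntax):

```text
# /etc/exports – directories made available over NFS
/home/shared   client1.mynetwork.lan(rw)  client2.mynetwork.lan(ro)
/var/backups   192.168.1.0/24(rw,sync)
```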
At the client end, NFS requires that you create a directory to form the mount point of the exported directory. The file that controls the mounting of file systems, 'fstab', is then modified to mount the networked directories when the client machine boots up. If you want to connect Windows-based clients to the network you have to configure the Samba service. This is similar to NFS. However, the results can vary depending upon which version of Windows you are using. Sometimes problems with the Windows registry can block full network access. Also, the Samba system requires complex configuration in order to establish Windows 'shares' on the server, and to connect networked printers.
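At the client end described above, the matching 'fstab' entry might look like this (server name and mount point are illustrative):

```text
# /etc/fstab – mount the server's shared directory at boot
server.mynetwork.lan:/home/shared   /mnt/shared   nfs   rw,hard,intr   0 0
```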
The other option for file transfer is File Transfer Protocol (FTP). This is a service that allows the movement of files between one machine and another using an FTP program. There are two forms of FTP:
User-based FTP, which allows files to be moved to a specific user account, and which is password protected for security.
Anonymous FTP, which allows anyone to download/upload files.
How this is configured depends upon the FTP daemon you use. If you use wu-ftpd, just activating the service enables user-based FTP, but not anonymous FTP. If you use pure-ftpd, activating the service automatically provides anonymous FTP, but it must be configured for user-based FTP. Configuration details are usually provided with the documentation that accompanies the daemon program.
FTP is useful because it's more efficient at moving huge files than NFS. So if users back up their data to the server, FTP can provide a quick and easy alternative to NFS. FTP is also useful for Windows machines that don't have access to the network using Samba. Even though the Windows machine is not properly logged onto the network, it still has access to the basic TCP/IP services. This means that the machine can move data on and off its hard disk to the server using FTP.

Intranets and mail
An Intranet, or local web server, is very simple to set up. All you do is install the Apache web server, enable the service, and straight away you should be able to access the test page. All you need to do then is replace the test pages with your own web site in order to run your Intranet.
Of course, operating a proper Intranet requires a lot more thought and editing of configuration files. In particular, if you want to use local search engines, or other web based tools, you will have to enable these individually and edit the configuration files as required. But for a simple network, for example where you only require very basic information hosting, just enabling the web server daemon should work OK.
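As an aside, the 'serve files from a directory' idea behind a simple Intranet can be demonstrated without Apache, using Python's built-in HTTP server (a quick sketch for experimentation only, not a substitute for a properly configured Apache installation):

```python
import http.server
import socketserver
import threading
import urllib.request

# Serve files from the current directory, much as a web server
# serves pages from its document root. Port 0 asks the OS for
# any free port.
handler = http.server.SimpleHTTPRequestHandler
httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# Any client on the network could now fetch pages; here we fetch
# the root listing from the local machine itself.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    status = resp.status
print(status)  # 200 means the "site" is being served
httpd.shutdown()
```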
Configuring email is a lot harder. Once email is configured, the clients on the network can mail each other – which is often the easiest way for users to share information and files. Depending on which mail transport agent (MTA) you use, you will have to configure user accounts on the server as well as the MTA. Sendmail is the most complex to configure – mainly because it's such a complex program. Others, like Exim or Postfix, are easier to configure, but not all Linux distributions install and configure them as standard as part of the installation process.
For users to check their mail you have to configure a mail delivery agent (MDA). The simplest MDA is the Post Office Protocol (POP). This is provided as part of the 'IMAP' package with most distributions. All you do is install POP and activate the daemon. Then users can check their mail. The Internet Message Access Protocol (IMAP) is also simple to configure. But unlike POP, IMAP allows access to messages stored on the server – so you only download a particular message when you need to read it. However, the trade-off with IMAP is that all the accumulating email can clog up a small hard drive if you use an older machine as a network server.

Networking benefits
This report is just a quick run-through of what a network is, and the steps that you need to go through to set one up. But it's also important to understand the benefits of setting up a network.
First and foremost, networking improves the ability of a computer to store and exchange information. For example, why back up data to CD? It is easier and faster to back up files to another machine over a network. If you have more than one machine in use, a network also enables those machines to share resources – like the same Internet connection, or an expensive, high quality laser printer.
The other benefit of a network is that you can run Internet services over your local network. This is useful for training people to use these services without the need to connect to the Internet. It's also useful if you want to develop web sites, along with complex server-side functions, because you can run the system over the network and perfect its operation before uploading it to a live web server.
More than anything, setting up a network, and enabling different services on the network, is an excellent way for a person to improve their knowledge of how computer systems work. This is because networking requires interaction with hardware, as well as software, in order to get things working. You don't have to learn programming to do this (although it might be useful). But the requirement to interact with the system at the command line, as well as using graphical tools, improves your skills in interacting with the computer, while the need to understand how the various services work improves a person's use of networking services generally.
So, there are many benefits to setting up a network. At the simplest level, it allows you to back up your laptop in case it gets stolen. At the more complex level, the process of setting up a network will improve the skills of those using it.

Tuesday, April 15, 2008

How Do I Share An Internet Connection?

So you finally got a high speed Internet connection and you can let that old modem gather dust. But you've got more than one computer, so how do you hook things up so that all of them can share the same connection?
There are two basic ways to share an Internet connection:
• Use the Internet Connection Sharing (ICS) feature that is part of Windows XP.
• Use a router (gateway) between your computers and the cable or DSL modem.
Expert Zone columnist Sharon Crawford does an excellent job of describing how to use Internet Connection Sharing in her earlier column, Internet Connection Sharing. I'll describe how to add a router to your network.
Routers, often called gateways, are a way to both isolate one segment of your network from another and connect the two. In the home environment, they separate your home network from the Internet, while at the same time providing a connection point. To your cable company or DSL provider, they make your internal network appear to be a single device, so you don't need to pay extra for additional computers connected to them. Figure 1 shows what your network might look like with a router installed and a couple of computers networked.
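The 'single device' trick the router performs is network address translation (NAT). A very rough sketch of the bookkeeping involved (illustrative only – a real NAT implementation also rewrites checksums, tracks connection state, and times entries out; the addresses and port numbers here are hypothetical):

```python
# Toy NAT table: maps (internal IP, internal port) to a public port.
PUBLIC_IP = "203.0.113.5"   # hypothetical address from the ISP

nat_table = {}
next_port = 40000

def outbound(src_ip, src_port):
    """Rewrite an outgoing connection so it appears to come from the router."""
    global next_port
    key = (src_ip, src_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def inbound(public_port):
    """Route a reply arriving on a public port back to the right internal host."""
    for (ip, port), pub in nat_table.items():
        if pub == public_port:
            return ip, port
    return None

# Two home PCs share one public address:
print(outbound("192.168.0.2", 5000))  # ('203.0.113.5', 40000)
print(outbound("192.168.0.3", 5000))  # ('203.0.113.5', 40001)
print(inbound(40001))                 # ('192.168.0.3', 5000)
```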

Computer Network

A computer network is an interconnected group of computers. Networks may be
classified by the network layer at which they operate according to basic reference
models considered as standards in the industry, such as the four-layer Internet
Protocol Suite model. While the seven-layer Open Systems Interconnection (OSI)
reference model is better known in academia, the majority of networks use the
Internet Protocol Suite (IP).
By scale
Computer networks may be classified according to the scale: Personal area
network (PAN), Local Area Network (LAN), Campus Area Network (CAN),
Metropolitan area network (MAN), or Wide area network (WAN).
As Ethernet increasingly becomes the standard interface for networks, these distinctions
are more important to the network administrator than the user. Network
administrators may have to tune the network, to correct delay issues and achieve
the desired Quality of Service (QoS).
By connection method
Computer networks can also be classified according to the hardware technology
that is used to connect the individual devices in the network such as Optical fiber,
Ethernet, Wireless LAN, HomePNA, or Power line communication.
Ethernets use physical wiring to connect devices. Often, they employ
hubs, switches, bridges, and routers.
Wireless LAN technology is built to connect devices without wiring. These devices
use a radio frequency to connect.
By functional relationship (Network Architectures)
Computer networks may be classified according to the functional relationships
which exist between the elements of the network, e.g., Active Networking,
Client-server and Peer-to-peer (workgroup) architectures.
By network topology
Computer networks may be classified according to the network topology upon
which the network is based, such as Bus network, Star network, Ring network,
Mesh network, Star-bus network, Tree or Hierarchical topology network, etc.
Network Topology signifies the way in which intelligent devices in the network see
their logical relations to one another. The use of the term "logical" here is
significant. That is, network topology is independent of the "physical" layout of the
network. Even if networked computers are physically placed in a linear
arrangement, if they are connected via a hub, the network has a Star topology,
rather than a Bus Topology. In this regard the visual and operational
characteristics of a network are distinct; the logical network topology is not
necessarily the same as the physical layout.
By protocol
Computer networks may be classified according to the communications protocol
that is being used on the network. See the articles on List of network protocol
stacks and List of network protocols for more information. For a development of
the foundations of protocol design see Srikant 2004 [1] and Meyn 2007 [2].
Types of networks:
Below is a list of the most common types of computer networks in order of scale.
Personal Area Network (PAN)
A personal area network (PAN) is a computer network used for communication
among computer devices close to one person. Some examples of devices that
may be used in a PAN are printers, fax machines, telephones, PDAs or scanners.
The reach of a PAN is typically within about 20-30 feet (approximately 6-9
meters).
Personal area networks may be wired with computer buses such as USB[3] and
FireWire. A wireless personal area network (WPAN) can also be made possible
with network technologies such as IrDA and Bluetooth.
Local Area Network (LAN)
A network covering a small geographic area, like a home, office, or building.
Current LANs are most likely to be based on Ethernet technology. For example, a
library will have a wired or wireless LAN for users to interconnect local devices
(e.g., printers and servers) and to connect to the internet. All of the PCs in the
library are connected by category 5 (Cat5) cable, running the IEEE 802.3 protocol
through a system of interconnection devices and eventually connect to the internet.
The cables to the servers are on Cat 5e enhanced cable, which will support IEEE
802.3 at 1 Gbit/s.
The staff computers (bright green) can get to the color printer, checkout records,
and the academic network and the Internet. All user computers can get to the
Internet and the card catalog. Each workgroup can get to its local printer. Note that
the printers are not accessible from outside their workgroup.
Typical library network, in a branching tree topology and controlled access to
resources.
All interconnected devices must understand the network layer (layer 3), because
they are handling multiple subnets (the different colors). Those inside the library,
which have only 10/100 Mbps Ethernet connections to the user device and a
Gigabit Ethernet connection to the central router, could be called "layer 3
switches" because they only have Ethernet interfaces and must understand IP. It
would be more correct to call them access routers, where the router at the top is a
distribution router that connects to the Internet and academic networks' customer
access routers.
The defining characteristics of LANs, in contrast to WANs (wide area networks),
include their higher data transfer rates, smaller geographic range, and lack of a
need for leased telecommunication lines. Current Ethernet or other IEEE 802.3
LAN technologies operate at speeds up to 10 Gbit/s. This is the data transfer
rate. IEEE has projects investigating the standardization of 100 Gbit/s, and
possibly 40 Gbit/s.
Campus Area Network (CAN)
A network that connects two or more LANs but that is limited to a specific and
contiguous geographical area such as a college campus, industrial complex, or a
military base. A CAN may be considered a type of MAN (metropolitan area
network), but is generally limited to an area that is smaller than a typical MAN.
This term is most often used to discuss the implementation of networks for a
contiguous area.
Metropolitan Area Network (MAN)
A Metropolitan Area Network is a network that connects two or more Local Area
Networks or Campus Area Networks together but does not extend beyond the
boundaries of the immediate town, city, or metropolitan area. Multiple routers,
switches & hubs are connected to create a MAN.
Wide Area Network (WAN)
A WAN is a data communications network that covers a relatively broad
geographic area (i.e. from one city to another, or from one country to another) and
that often uses transmission facilities provided by common carriers, such as
telephone companies. WAN technologies generally function at the lower three
layers of the OSI reference model: the physical layer, the data link layer, and the
network layer.
Global Area Network (GAN)
Global area networks (GAN) specifications are in development by several groups,
and there is no common definition. In general, however, a GAN is a model for
supporting mobile communications across an arbitrary number of wireless LANs,
satellite coverage areas, etc. The key challenge in mobile communications is
"handing off" the user communications from one local coverage area to the next.
In IEEE Project 802, this involves a succession of terrestrial Wireless local area
networks (WLANs).
Internetwork
Two or more networks or network segments connected using devices that
operate at layer 3 (the 'network' layer) of the OSI Basic Reference Model, such as
a router. Any interconnection among or between public, private, commercial,
industrial, or governmental networks may also be defined as an internetwork.
In modern practice, the interconnected networks use the Internet Protocol. There
are at least three variants of internetwork, depending on who administers and who
participates in them:
• Intranet
• Extranet
• Internet
Intranets and extranets may or may not have connections to the Internet. If
connected to the Internet, the intranet or extranet is normally protected from being
accessed from the Internet without proper authorization. The Internet is not
considered to be a part of the intranet or extranet, although it may serve as a
portal for access to portions of an extranet.
Intranet
An intranet is a set of interconnected networks, using the Internet Protocol and
IP-based tools such as web browsers, that is under the control of a single
administrative entity. That administrative entity closes the intranet to the rest of the
world, and allows only specific users. Most commonly, an intranet is the internal
network of a company or other enterprise.
Extranet
An extranet is a network or internetwork that is limited in scope to a single
organization or entity but which also has limited connections to the networks of
one or more other usually, but not necessarily, trusted organizations or entities
(e.g. a company's customers may be given access to some part of its intranet
creating in this way an extranet, while at the same time the customers may not be
considered 'trusted' from a security standpoint). Technically, an extranet may also
be categorized as a CAN, MAN, WAN, or other type of network, although, by
definition, an extranet cannot consist of a single LAN; it must have at least one
connection with an external network.
Internet
A specific internetwork, consisting of a worldwide interconnection of
governmental, academic, public, and private networks based upon the Advanced
Research Projects Agency Network (ARPANET) developed by ARPA of the U.S.
Department of Defense – also home to the World Wide Web (WWW) and
referred to as the 'Internet' with a capital 'I' to distinguish it from other generic
internetworks.
Participants in the Internet, or their service providers, use IP addresses obtained
from address registries that control assignments. Service providers and large
enterprises also exchange information on the reachability of their address ranges
through the Border Gateway Protocol (BGP).
Basic Hardware Components
All networks are made up of basic hardware building blocks to interconnect
network nodes, such as Network Interface Cards (NICs), Bridges, Hubs,
Switches, and Routers. In addition, some method of connecting these building
blocks is required, usually in the form of galvanic cable (most commonly Category
5 cable). Less common are microwave links (as in IEEE 802.11) or optical cable
("optical fiber").
Network Interface Cards
A network card, network adapter or NIC (network interface card) is a piece of
computer hardware designed to allow computers to communicate over a
computer network. It provides physical access to a networking medium and often
provides a low-level addressing system through the use of MAC addresses. It
allows users to connect to each other either by using cables or wirelessly.
Repeaters
A repeater is an electronic device that receives a signal and retransmits it at a
higher level or higher power, or onto the other side of an obstruction, so that the
signal can cover longer distances without degradation.
Hubs
A hub contains multiple ports. When a packet arrives at one port, it is copied to all
the other ports of the hub. When the packets are copied, the destination address in the
frame does not change to a broadcast address. The hub operates in a rudimentary way:
it simply copies the data to all of the nodes connected to it.
Bridges
A network bridge connects multiple network segments at the data link layer (layer
2) of the OSI model. Bridges do not promiscuously copy traffic to all ports, as hubs
do, but learn which MAC addresses are reachable through specific ports. Once
the bridge associates a port and an address, it will send traffic for that address
only to that port. Bridges do send broadcasts to all ports except the one on which
the broadcast was received.
Bridges learn the association of ports and addresses by examining the source
addresses of the frames they see on various ports. Once a frame arrives through a
port, its source address is stored and the bridge assumes that MAC address is
associated with that port. The first time that a previously unknown destination
address is seen, the bridge will forward the frame to all ports other than the one on
which the frame arrived.
Bridges come in three basic types:
1. Local bridges: directly connect local area networks (LANs).
2. Remote bridges: can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, have largely been replaced by routers.
3. Wireless bridges: can be used to join LANs or connect remote stations to LANs.
Switches
Main article: Network switch
A switch is a device that performs switching. Specifically, it forwards and filters
OSI layer 2 datagrams (chunks of data communication) between ports (connected
cables) based on the MAC addresses in the packets.[6] This is distinct from a hub
in that it only forwards the datagrams to the ports involved in the communications
rather than all ports connected. Strictly speaking, a switch is not capable of routing
traffic based on IP address (layer 3) which is necessary for communicating
between network segments or within a large or complex LAN. Some switches are
capable of routing based on IP addresses but are still called switches as a
marketing term. A switch normally has numerous ports with the intention that most
or all of the network be connected directly to a switch, or another switch that is in
turn connected to a switch.
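The learning behaviour that bridges and switches share can be sketched as a simple table keyed by MAC address (a conceptual model only – real devices also age entries out, handle broadcasts, and avoid loops via spanning tree; the port numbers and shortened MAC addresses are invented for illustration):

```python
class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}  # MAC address -> port it was last seen on

    def frame_in(self, port, src_mac, dst_mac):
        # Learn: the source address is reachable via the ingress port.
        self.table[src_mac] = port
        # Forward: to the known port, or flood to all other ports
        # when the destination has not been seen yet.
        if dst_mac in self.table:
            return [self.table[dst_mac]]
        return [p for p in self.ports if p != port]

sw = LearningSwitch([1, 2, 3, 4])
print(sw.frame_in(1, "aa:aa", "bb:bb"))  # unknown dst: flood -> [2, 3, 4]
print(sw.frame_in(2, "bb:bb", "aa:aa"))  # aa:aa learned on port 1 -> [1]
print(sw.frame_in(1, "aa:aa", "bb:bb"))  # bb:bb now known -> [2]
```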
"Switches" is a marketing term that encompasses routers and bridges, as well as
devices that may distribute traffic on load or by application content (e.g., a Web
URL identifier). Switches may operate at one or more OSI layers, including
physical, data link, network, or transport (i.e., end-to-end). A device that operates
simultaneously at more than one of these layers is called a multilayer switch.
Overemphasizing the ill-defined term "switch" often leads to confusion when first
trying to understand networking. Many experienced network designers and
operators recommend starting with the logic of devices dealing with only one
protocol level, not all of which are covered by OSI. Multilayer device selection is an
advanced topic that may lead to selecting particular implementations, but
multilayer switching is simply not a real-world design concept.
Routers
Routers are networking devices that forward data packets between networks
using headers and forwarding tables to determine the best path to forward the
packets. Routers work at the network layer of the TCP/IP model or layer 3 of the
OSI model. Routers also provide interconnectivity between like and unlike media
(RFC 1812). This is accomplished by examining the header of a data packet,
and making a decision on the next hop to which it should be sent (RFC 1812).
They use preconfigured static routes, status of their hardware interfaces, and
routing protocols to select the best route between any two subnets. A router is
connected to at least two networks, commonly two LANs or WANs or a LAN and
its ISP's network. Some DSL and cable modems, for home use, have been
integrated with routers to allow multiple home computers to access the Internet.
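The forwarding-table lookup a router performs can be illustrated with Python's ipaddress module, choosing the most specific (longest-prefix) route that matches the destination (a simplified sketch; real routers also weigh metrics, interface state, and routes learned from routing protocols, and the prefixes and next-hop names here are invented):

```python
import ipaddress

# A toy forwarding table: network prefix -> next hop
routes = {
    ipaddress.ip_network("0.0.0.0/0"):      "isp-gateway",   # default route
    ipaddress.ip_network("192.168.0.0/16"): "lan-router",
    ipaddress.ip_network("192.168.5.0/24"): "lab-switch",
}

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(next_hop("192.168.5.9"))   # lab-switch  (matches the /24)
print(next_hop("192.168.9.9"))   # lan-router  (matches the /16)
print(next_hop("8.8.8.8"))       # isp-gateway (default route only)
```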

Thursday, April 10, 2008

How To Assemble And Build A PC

How To Assemble And Build A PC
CPU thermal compound is not a necessity, but it is recommended to keep your CPU cool under load conditions by helping heat dissipate faster. It is a must if you intend to overclock your PC.
Step 1: Installing the motherboard
Make sure you have all the components in place and a nice, clean and big enough place to work in.


Put your anti-static wrist strap on to prevent your components from being damaged by static electricity. Make sure your hands are clean before starting. First we will be installing the motherboard, which is a piece of cake to install.
Open the side doors of the cabinet
Lay the cabinet on its side
Put the motherboard in place
Drive in all the required screws
Tip: Most motherboards come with an antistatic bag. It is advisable to rest the motherboard on the bag for some time, and only take it off the bag just before placing it in the cabinet.

Step 2: Installing the CPU
The CPU is the heart of a computer, so make sure you handle it properly and do not drop or otherwise mishandle it. Also try not to touch the pins frequently so that they do not get dirty. Get hold of your motherboard and CPU manuals. You need to place the CPU on the dotted white patch of the motherboard in a particular orientation for it to fit properly. There is a golden mark on the CPU to help you align it. Consult both your motherboard and CPU manuals to see which position fits exactly, or carefully try each of the four orientations.

Lift the CPU lever on the motherboard
Place the CPU properly on the motherboard
Pull down the lever to secure the CPU in place
Warning: Do not try to push the CPU into the motherboard!
Got the thermal compound? Now is the time to use it. Take a small amount of it and carefully apply it to the top surface of the processor. Be careful not to get it on the neighbouring parts of the motherboard. If you do, clean it off immediately with a cloth.
Tip: Thermal compounds should be changed once every six months for optimal performance.

Step 3: Installing the heat sink
After installing the processor we proceed to installing the heat sink. There are different kinds of heat sinks that are bundled with the processor, and each has a different way of installation. Look in your CPU manual for instructions on how to install it properly.
Place the heat sink on the processor
Put the jacks in place
Secure the heat sink with the lever
After this you will need to connect the cable of the heat sink fan to the motherboard. Again, look in the motherboard manual to see where to connect it, and then connect it to the right port to get your heat sink into operation.





Step 4: Installing the RAM
Installing the RAM is also an easy job. The newer RAM, i.e. DDR RAM, is easy to install as you don't have to worry about which side goes into the slot. The older SDRAM was plagued by this problem.










If you want to use dual channel configuration then consult your manual on which slots to use to achieve that result.
Push the RAM down into the slot
Make sure both the clips hold the RAM properly



Step 5: Installing the power supply
We will now install the power supply, as the components we install after this will require power cables to be connected to them. There is not much to be done to install a PSU.






Place the PSU into the cabinet
Put the screws in place tightly
Tip: Some PSUs have extra accessories that come bundled with them. Consult your PSU manual to see how to install them.

Step 6: Installing the video card
First you will need to find out whether your video card is AGP or PCI-E. AGP graphics cards have become obsolete and are being phased out of the market quickly. So if you bought a spanking new card it will almost certainly be PCI-E.














Step 7: Installing the hard disk
The hard disk is another fragile component of the computer and needs to be handled carefully.





Place the hard drive into the bay
Secure the drive with screws
Connect the power cable from the PSU
Connect the data cable from the motherboard to the drive
If your hard drive is a SATA drive, connect one end of the SATA cable to the motherboard and the other to the SATA port on the hard disk. If your hard disk is the PATA type, use an IDE cable instead of the SATA cable.
Tip: If your PSU does not supply SATA power connectors, you will need to get a converter which will convert your standard IDE power connector to a SATA power connector.





Step 8: Installing the optical drive
The installation of an optical drive is very similar to that of a hard drive.
Place the optical drive into the bay
Drive in the screws
Connect the power cable and data cable
Tip: When installing multiple optical drives, take care with the jumper settings. Make sure you set one as master and the other as slave using the jumpers. This is not applicable if the drives are SATA drives.




Step 9: Connecting various cables
First we will finish setting up the internal components and then get on to the external ones. You will need to consult your motherboard manual to find the appropriate ports for connecting the various cables at the right places on the motherboard.
Connect the large ATX power connector to the power supply port on your motherboard
Next get hold of the smaller square power connector which supplies power to the processor, and connect it to the appropriate port with help from your motherboard manual
Connect the cabinet cables for the power and reset buttons to the appropriate ports on the motherboard
Connect the front USB/audio panel cables to the motherboard
Plug in the cables of the cabinet fans
You are done with installing the internal components of the PC. Close the side doors of the cabinet, stand it upright and place it on your computer table. Get the rest of the PC components like the monitor, keyboard, mouse, speakers etc., which we will connect now.





Connect the VGA cable of the monitor to the VGA port
If the mouse/keyboard are PS/2, connect them to the PS/2 ports; otherwise use the USB ports
Connect the speaker cable to the audio port
Plug the power cable from the PSU into the UPS
Also plug in the power cable of the monitor
You are now done with setting up your PC. Power on and see your rig boot to glory.


Step 10: Installing the OS and drivers
We are done with the hardware part. Now get your favorite OS disks ready and the CD that came with your motherboard.
Set the first boot device to the CD/DVD drive in the BIOS
Pop in the OS disk
Reboot the PC
Install the OS
Install the drivers from the motherboard CD (applicable only to Windows OS)
Voila! You have your PC up and running. Enjoy your journey with your self-assembled rig!
Jargon Buster
CPU - Central Processing Unit
RAM - Random Access Memory
DDR - Double Data Rate
SDRAM - Synchronous Dynamic Random Access Memory
PSU - Power Supply Unit
AGP - Accelerated Graphics Port
PCI-E - Peripheral Component Interconnect Express
SATA - Serial Advanced Technology Attachment
PATA - Parallel Advanced Technology Attachment
IDE - Integrated Drive Electronics
ATX - Advanced Technology Extended
USB - Universal Serial Bus
VGA - Video Graphics Array
PS/2 - Personal System/2
OS - Operating System

How To Assemble A Desktop PC/Silencing

In contrast to overclocking, you may prefer to silence your computer. Some high-performance PCs are very loud indeed, and it is possible to reduce the noise dramatically. The main sources of noise are fans (CPU, case, power supply, motherboard, graphics card) and hard disks. While total silence in a PC is possible, it is far cheaper and easier to aim for something 'virtually inaudible'.

Note that quieter computers sometimes run slightly hotter—especially in small form factor (SFF) systems, so you need to monitor carefully what you do. Usually you can't overclock and silence at the same time (although it is possible with the right CPU and cooling techniques). Sometimes CPUs are underclocked and fans are undervolted to achieve greater silence at the expense of performance.

Designing a powerful and quiet machine requires careful consideration in selecting components, but need not be much more expensive than a normal, loud PC. If you are looking to quiet an existing PC, find the offending component that produces the loudest or most irritating noise to replace first, and work down from there.

Fans

In general, large diameter (120 mm), high quality fans are much quieter than small diameter ones, because they can move the same amount of air as 80 mm or 92 mm fans, but at slower speeds. Temperature-regulated fans are also much quieter, as they will automatically spin at a reduced speed when your computer is not in heavy use. Wire mesh grills (or no grill at all) allow better airflow than the drilled holes used in many cases.
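To see why larger fans can spin more slowly, note that the volume of air moved scales roughly with swept area times blade speed. A back-of-the-envelope comparison (a simplification – real airflow also depends on blade design and static pressure):

```python
import math

def swept_area(diameter_mm):
    # Area of the circle the blades sweep through.
    return math.pi * (diameter_mm / 2) ** 2

# For the same airflow, required speed scales inversely with area.
ratio = swept_area(120) / swept_area(80)
print(round(ratio, 2))  # 2.25: a 120 mm fan can turn roughly 2.25x
                        # slower than an 80 mm fan for similar airflow
```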

CPU

Modern CPUs can generate a lot of heat in a very small area—sometimes as much as a 100-watt lightbulb! For the vast majority of processors, a dedicated fan will be a necessity. There are some, like VIA processors, that require only a heat sink, but you will not find passively cooled CPUs at nearly the same speeds allowed by active cooling. However, for modern computers, CPUs are not the limiting component for speed in daily tasks, so unless you do demanding 3D gaming or video editing, a passively cooled processor may be just for you. They are also very attractive in media-center PCs, or other specialized applications where computer noise would be more noticeable.
The noisiest fan is usually the CPU fan: the Intel-supplied fan-heatsinks are particularly loud, although they do provide good cooling. Some BIOSes can automatically slow the CPU fan down when the CPU is not too hot; if this option is available, turn it on. You can also get third-party coolers designed to be less noisy, such as those made by Zalman.

Power Supply (PSU)

Noisy power supplies simply have to be replaced with quieter ones. Case fans can be slowed down using fan-speed controllers or resistors (but beware of insufficient cooling). Motherboard and lower-end graphics-card fans can usually be replaced with a small passive heatsink.

Video Card

Actively fan-cooled graphics cards are very common in gaming PCs. Since 2004, most of these cards can adjust their performance so that the fan slows down when no 3D acceleration is needed. Because replacing the fan voids the warranty, check reviews first to see whether a card is noisy. If you do replace the cooler, make sure the new one is compatible with your card.
After a few weeks, dust and debris can accumulate on fan blades. Dust on PC components acts as an insulator, trapping heat and forcing your fans to spin at higher speeds to keep everything cool. Keep your PC clean to reduce noise and increase efficiency.

Water cooling

An efficient, if expensive, way to eliminate the need for most fans in a computer is water cooling. Water cooling kits are available for beginners, and additional components, or "water blocks", can be added to the system, allowing virtually any component that needs cooling to be put "on water".

Most water cooling systems are not fanless, as the radiator still needs a fan to dissipate the heat. Fanless solutions exist, but they must be placed outside the PC case, making the computer less portable.

Other cooling fluids are possible in a sealed system, although plain water is generally preferred because it has higher heat capacity and thermal conductivity than oil, and it is easier to clean up if a leak ever occurs: turn off the computer, shake off most of the water, and use a hair dryer to evaporate the rest of the water.
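Water's heat-capacity advantage can be put in rough numbers. Using typical handbook values (about 4186 J/(kg·K) for water and about 1670 J/(kg·K) for a mineral oil; both are assumed figures, and real oils vary), the mass flow needed to carry a given heat load with a fixed temperature rise follows from mdot = P / (c · ΔT):

```python
def mass_flow_for_load(power_w, specific_heat_j_per_kg_k, delta_t_k):
    """kg/s of coolant needed to carry power_w with a delta_t_k temperature rise."""
    return power_w / (specific_heat_j_per_kg_k * delta_t_k)

# 100 W CPU load, coolant allowed to warm by 5 K on its way to the radiator:
water_flow = mass_flow_for_load(100, 4186, 5)  # roughly 0.005 kg/s
oil_flow = mass_flow_for_load(100, 1670, 5)    # roughly 0.012 kg/s
print(f"water: {water_flow*1000:.1f} g/s, oil: {oil_flow*1000:.1f} g/s")
```

The oil loop needs about two and a half times the mass flow for the same job, one reason plain water is usually preferred in pumped loops.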

Oil cooling

Transformer oil has been used to cool electrical equipment for decades.

Some people are experimenting with oil cooling for personal computers. Since oil is non-conductive, the motherboard, graphics card, and power supply (but not the hard drives or optical drives!) will continue to run submerged in a "fishtank" filled with oil. Some people prefer colorless, transparent mineral oil or cooking oil, but Frank Völkel recommends motor oil[1].

Oil cooling is lower cost than water cooling, because it doesn't require water-tight "blocks" or hoses. Some people leave the fans running on the motherboard and power supply to "stir" the oil. Other people remove all the fans and add a (submerged) pump to "blow" a stream of oil onto the CPU hot spot. Some CPUs, if given a big enough metal heat sink, can be adequately cooled by passive convection currents in the oil (and the large surface area of the oil-to-case and case-to-air), without any fans or pumps.

If any cable (the hard drive ribbon cable, the power cable, the monitor cable, etc.) exits the case below the oil line, it must have an oil-tight exit seal; consider making all cables exit the top of the case instead.

Immersion in other cooling fluids, such as Fluorinert or liquid nitrogen, has also been attempted.


Hard disk

A 'resting' hard disk is generally quite quiet compared with any fan, but its noise increases dramatically when it starts 'churning', as when you open or save a file or run a virus scan. As most hard drive manufacturers place capacity and performance ahead of noise, it is recommended that you look for a hard drive with good acoustics to start with. SilentPCReview.com does comprehensive testing, so picking any of their recommended drives will serve you well. There is usually a compromise between performance and sound, so opting for a slower-RPM or smaller-capacity single-platter drive may be necessary to reach very quiet levels. Also, 2.5" notebook drives can be much quieter than any 3.5" desktop drive, but they are more expensive and come in smaller capacities.

After selecting a quiet drive, or if you want to reduce the noise coming from a loud drive, look into mounting options. Hard drives are usually mounted with four screws attaching them directly to the case, providing very stable support, some heat dissipation and a lot of direct transmission of HDD vibrations to the case. Reducing this transmission to almost nothing is possible, though it is not always easy.

But do ensure sufficient cooling of the hard drive: running a hard drive moderately hot can reduce its lifespan to under a year! Some mounts are designed to provide both extra cooling and silencing, such as the heat-pipe coolers. Spinning the HDD down when not in use will also reduce noise, but it can reduce the life of the drive by increasing the number of landings and take-offs performed by the read/write heads.

The best noise reductions come from suspending the hard drive with elastic, providing no direct route for sound transmission to the case. You can make your own from elastic in a fabric store, or buy kits that provide materials and instructions. (Rubber bands are not recommended, as they will become weak from the HDD heat and snap.)

Foam can be used to dampen vibrations, but may trap more heat than is safe. Resting the hard drive on the floor of your case on a bed of foam can be very effective at reducing noise.

Using silicone or rubber screws instead of metal mounting screws will give you marginal sound reduction, but is easiest and cheapest to implement. You also won't have to worry about shifting of the HDD if you move your computer.

Maxtor provides a software tool that can adjust a hard disk's noise/performance trade-off to what your system requires. The technique is called acoustic management. However, only certain drives support this feature. You can read more in the "Definitive Maxtor Silent Store Guide" and get the tool from Maxtor.
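On Linux, the same Automatic Acoustic Management (AAM) feature can be toggled with hdparm's `-M` flag on drives that support it. A configuration sketch (the device path is a placeholder for your own drive, and the command requires root and a drive that actually implements AAM):

```shell
# Hypothetical device path; substitute your own drive.
DEV=/dev/sda

sudo hdparm -M "$DEV"       # query the current acoustic management setting
sudo hdparm -M 128 "$DEV"   # 128 = quietest seeks (slower)
# sudo hdparm -M 254 "$DEV" # 254 = fastest seeks (louder)
```

If the drive does not support AAM, hdparm will report that the setting is not supported rather than change anything.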

Completely silent computers need solid-state storage such as flash memory, which has no moving parts and makes no noise. It is more expensive and has less capacity than a normal hard drive, so it can't yet be considered a mainstream storage solution, but it could suffice for a web-browsing PC. At the moment, hard drives are the only practical storage solution except in very specialised circumstances, though this will likely change soon as the price of flash memory continues to drop.


Other

Steel cases are quieter than aluminum ones, because the denser material vibrates less easily.
Quiet cases are available, containing noise-damping acoustic foam. There are 3rd-party acoustic foams that you may decide to add as well.
Experiment with rubber or foam washers when mounting drives and fans. These will dampen any vibration these devices cause.
Keep cables tied up and neat. Not only will this keep them clear of fans (which could quickly cause dangerous heat build-up), but the reduced impedance to airflow throughout your case will keep things cooler. Flat, ribbon-shaped cables can safely be folded up to a fraction of their original width.
Make sure your case has rubber or foam feet if it rests on a hard surface. Placing it on carpeting will also reduce vibrations.
Underclocking will reduce system performance, but you can also then reduce the CPU voltage, and power consumption as a whole. Noisy fans may then also be operated at reduced speed or eliminated altogether, as the computer will produce less heat. The converse of the diminishing-returns law for overclocking is that underclocking can prove surprisingly effective.
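The payoff of combining underclocking with undervolting follows from the rough CMOS dynamic-power relation P ≈ C·V²·f: power falls linearly with frequency but quadratically with voltage. A quick sketch with hypothetical figures (real savings depend on the chip and on static leakage, which this ignores):

```python
def dynamic_power_ratio(freq_ratio, voltage_ratio):
    """CMOS dynamic power scales roughly with f * V^2 (P ~ C * V^2 * f).
    Both arguments are new/old ratios; returns the new/old power ratio."""
    return freq_ratio * voltage_ratio ** 2

# Hypothetical example: underclock by 20% and undervolt by 10%.
ratio = dynamic_power_ratio(0.8, 0.9)
print(f"power drops to about {ratio:.0%} of stock")  # ~65%
```

A 20% performance sacrifice cutting heat output by roughly a third is why underclocking can be "surprisingly effective" for silencing.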
The really obvious, but surprisingly effective: keep the computer under your desk or even in a closed cupboard, rather than under or beside your monitor.
NOTE: No matter what technique you use to quiet the machine, be sure to keep a steady supply of fresh air over all components. Don't put your machine in a closed cupboard unless you are sure heat will not be an issue. If you use acoustic foams, be sure they aren't also acting as thermal insulators and keeping components hot.

How To Assemble A Desktop PC/Overclocking

Overclocking is the practice of making a component run at a higher clock speed than the manufacturer's specification. The idea is to increase performance for free or to exceed current performance limits, but this may come at the cost of stability.

Overclocking is like souping up a car: if you just want to get where you're going, there's no need for it. But it is fun and educational, and it can get you a machine whose performance is out of all proportion to its cost.

Think of the 3 GHz on your new 3 GHz Pentium 4 as a speed limit asking to be broken. Some other components in your computer can also be overclocked, in many cases including RAM and your video card. Overclocking is possible because of the way electronic parts, especially VLSI (Very Large Scale Integration) chips, are made and sold. All processors in a given line, the Pentium 4 for example, are made the same way, on a large wafer that is cut up into individual processors. Those processors are then tested and graded by speed: the best chips are marked as 3.0 GHz, the second best as 2.8 GHz, and so on. As time goes by and production processes and masks improve, even the lower-rated chips may be capable of faster speeds, especially if vigorous cooling is implemented. Also, many manufacturers will mark chips that test faster at slower speeds if there is higher demand for the lower-end component.
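On motherboards of this era, the CPU core clock is derived as the front-side-bus (FSB) clock times a multiplier, and raising the FSB is the usual overclocking knob. A toy illustration (the specific numbers are hypothetical, not a recommendation for any particular chip):

```python
def core_clock_mhz(fsb_mhz, multiplier):
    """Core clock = FSB clock x multiplier, as era-typical boards derive CPU speed."""
    return fsb_mhz * multiplier

stock = core_clock_mhz(200, 15)        # 3000 MHz: the rated 3.0 GHz
overclocked = core_clock_mhz(220, 15)  # 3300 MHz: a 10% overclock from the FSB
print(stock, overclocked)
```

Note that raising the FSB also speeds up everything else tied to that bus (memory, chipset), which is often what limits the overclock rather than the CPU itself.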

It’s important to note that not every chip will be overclockable; it’s really the luck of the draw. Some companies that sell ‘factory overclocked’ systems engage in a practice called “binning” where they buy a number of processors, test them for overclocking potential and throw the ones that don’t overclock in a bin to be resold at their rated speed. Even with processors that have a reputation for overclocking well, some parts simply will not exceed their rating.

That said, effective cooling can give a boost to a chip's overclockability. With luck you will be able to get extra performance out of your components for free. With luck and skill you can get performance that is not possible even with top-of-the-line components. Sometimes you can buy cheaper parts and then overclock them to the clock speed of a higher-end component, though the cost of extra cooling can eat up any money you save on the part, not to mention warranty and part-life issues.

WARNING: OVERCLOCKING MAY VOID THE WARRANTY ON THE PARTS BEING OVERCLOCKED. DOING SO MAY ALSO CAUSE SYSTEM INSTABILITY, AND MAY ALSO CAUSE DAMAGE TO COMPONENTS AND DATA. REMEMBER THE 3 "C'S" WHEN OVERCLOCKING: CAREFUL, CONSERVATIVE, and CAUTIOUS.



Tuesday, April 8, 2008

Steps to enable JavaScript in Microsoft Internet Explorer 5.x or 6.x


Please enable JavaScript in your IE 5.x or later browser to submit your application form properly.

To enable JavaScript in Microsoft Internet Explorer 5.x or 6.x, perform the following steps:


1. From the Tools menu, click Internet Options.

2. On the Security tab, click Custom Level.

3. Scroll to the Scripting section and, under Active scripting, select Enable.

4. Click OK.

5. Click OK again to close Internet Options.

6. From the File menu, click Close.

7. Relaunch your browser.


Your browser caches many of its settings. The best way to ensure changes to core settings take effect is to close all browser windows completely, then relaunch the browser.

Test your Internet connection speed

Test your Internet connection speed at Speedtest.net


Friday, April 4, 2008

Geojit is a member of the Cochin Stock Exchange.


Mr. C. J. George and Mr. Ranajit Kanjilal founded Geojit as a partnership firm in 1987. In 1993, Mr. Ranajit Kanjilal retired from the firm and Geojit became a proprietary concern of Mr. C. J. George. In 1994, it became a public limited company under the name Geojit Securities Ltd. The Kerala State Industrial Development Corporation Ltd. (KSIDC) became a co-promoter of Geojit in 1995 by acquiring a 24% stake in the company, the only instance in India of a government entity participating in the equity of a stock broking company. Geojit listed on The Stock Exchange, Mumbai (BSE) in 2000. In 2003, the company was renamed Geojit Financial Services Ltd. (GFSL). The board of the company consists of professional directors, including a Kerala government nominee, with two-thirds of the board members being independent directors. With effect from July 2005, the company is also listed on The National Stock Exchange (NSE). Geojit is a charter member of the Financial Planning Standards Board of India and is one of the largest DP brokers in the country.

Overseas Joint Ventures
Barjeel Geojit Securities, LLC, Dubai, is a joint venture of Geojit with the Al Saud Group belonging to Sultan bin Saud Al Qassemi, which has diversified interests in equity markets, real estate and trading. Barjeel Geojit is a financial intermediary and the first licensed brokerage company in the UAE. It has facilities for offline and online trading in the Indian capital market, as well as in US, European and Far-Eastern capital markets. It also provides depository services and deals in Indian and international funds. An associate company, Global Financial Investments S.A.O.G., provides similar services in Oman.
Aloula Geojit Brokerage Company is Geojit's recently promoted joint venture in Saudi Arabia with the Al Johar Group. Saudi Arabia is home to the world's single largest NRI population. The new venture is expected to start operations in the latter half of 2008. Saudi nationals and NRIs will be able to invest in the Saudi capital market, and NRIs will also be able to invest in the Indian stock market and in Indian mutual funds. This joint venture makes Geojit the first Indian stock broking company to commence domestic retail brokerage operations in any foreign country.
Overseas Business Association
Bank of Bahrain and Kuwait (BBK), one of the largest retail banks in Bahrain and Kuwait, entered into an exclusive agreement with Geojit in September 2007 through its NRI business. This association gives the bank's sophisticated client base the opportunity to diversify their holdings through investments in the Indian stock market. Services offered are: investment advisory, portfolio management, mutual funds, trading in the Indian equity market, DEMAT and bank accounts, offline share transactions and PAN card.
Bahrain Location



Our Branch Near Chengannur



Geojit, 1st Floor, Kochuputhenpurakal Complex, Engineering College Junction, Court Road, Chengannur - 689 121

Tel: 0479-2457545 / 3295001, 94479 71343
E-mail: chengannur@geojit.com
