How I assembled a home network server. Operating systems for a NAS server


Never before has the problem of file storage been as acute as it is today.

The appearance of hard drives with capacities of 3 and even 4 TB, Blu-Ray discs holding 25-50 GB, and cloud storage does not solve the problem. There are more and more devices around us that generate heavy content: photo and video cameras, smartphones, HD television and video, game consoles, etc. We generate and consume (mostly from the Internet) hundreds and thousands of gigabytes.

This means the average user's computer stores a huge number of files, hundreds of gigabytes in total: a photo archive, a collection of favorite films, games, programs, work documents, etc.

All this needs to not only be stored, but also protected from failures and other threats.

Pseudo-solutions to the problem

You can equip your computer with a high-capacity hard drive. But in this case, the question arises: how and where to archive, say, data from a 3-terabyte disk?!

You can install two disks and use them in a RAID mirror, or simply back up regularly from one to the other. This is also not the best option: say your computer is attacked by viruses; most likely, they will corrupt the data on both disks.

You can store important data on optical discs by organizing a home Blu-Ray archive. But it will be extremely inconvenient to use.

Network storage is the solution to the problem! Partly...

Network Attached Storage (NAS) is networked file storage. But it can be explained even more simply:

Let's say you have two or three computers at home. Most likely, they are connected to a local network (wired or wireless) and to the Internet. Network storage is a specialized computer that is built into your home network and connected to the Internet.

As a result, the NAS can store any of your data, and you can access it from any home PC or laptop. Looking ahead, it is worth saying that the local network must be modern enough for you to quickly and easily "pump" tens and hundreds of gigabytes through it between the server and your computers. But more on that later.

Where can I get a NAS?

Method one: purchase. A more or less decent NAS with 2 or 4 hard drive bays can be bought for $500-800. Such a server comes in a small case and is ready to work, as they say, "out of the box".

However, the cost of hard drives is added on top of those 500-800 dollars, since NAS units are usually sold without them!

Pros: you get a ready-made device and spend a minimum of time.

Disadvantages of this solution: a NAS costs as much as a desktop computer, yet has incomparably fewer capabilities. In fact, it is just a networked external drive; for quite a lot of money you get a limited set of features.

My solution: DIY!

This is much cheaper than buying a separate NAS, although it takes a little longer because you assemble the machine yourself. In return, you get a full home server that can, if desired, be used to the full range of its capabilities.

ATTENTION! I strongly do not recommend building a home server from an old computer or old, worn-out components. Do not forget that a file server is your data storage. Do not skimp on making it as reliable as possible, so that one fine day all your files do not "burn out" along with the hard drives, for example, due to a failure in the motherboard's power circuitry...

So, we decided to build a home file server. A computer whose hard drives are available on the home local network for use. Accordingly, we need such a computer to be energy efficient, quiet, compact, not emit a lot of heat, and have sufficient performance.

Based on this, the ideal solution is a motherboard with a built-in processor and passive cooling, compact in size.

I selected the ASUS C-60M1-I motherboard. It was purchased from the online store dostavka.ru:

The package includes a high-quality user manual, a driver disk, a sticker for the case, 2 SATA cables and a rear panel for the case:

ASUS, as always, equipped the board very generously. You can find the full board specifications here: http://www.asus.com/Motherboard/C60M1I/#specifications. I will only talk about some important points.

At a cost of just 3300 rubles, it provides 80% of everything we need for the server.

On board is a dual-core AMD C-60 processor with an integrated graphics chip. It runs at 1 GHz (and can automatically boost to 1.3 GHz); today it is found in some netbooks and even laptops, and is roughly an Intel Atom D2700-class processor. But the Atom is known to struggle with parallel workloads, which often drags its performance down to nothing, whereas the C-60 does not have this drawback and in addition carries quite powerful graphics for its class.

There are two DDR3-1066 memory slots, with the ability to install up to 8 GB of memory.

The board carries six SATA 6 Gbit/s ports. This allows you to connect as many as 6 drives (!) to the system, not just 4 as in a regular home NAS.

What is MOST important: the board is built on UEFI rather than the usual BIOS. This means the system can work normally with hard drives larger than 2.2 TB and will "see" their entire capacity. BIOS-based motherboards cannot address drives larger than 2.2 TB without special "crutch" utilities, and using that kind of utility is unacceptable when we are talking about reliable data storage on a server.
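The 2.2 TB figure is not arbitrary: legacy MBR partition tables store sector counts in 32-bit fields, and classic drives use 512-byte logical sectors, so the limit falls straight out of the arithmetic:

```python
# Why legacy BIOS/MBR tops out near 2.2 TB: the MBR partition table
# stores sector addresses in 32-bit fields, and classic drives use
# 512-byte logical sectors.
SECTOR_SIZE = 512        # bytes per logical sector (classic drives)
MAX_SECTORS = 2 ** 32    # largest value a 32-bit LBA field can hold

max_bytes = SECTOR_SIZE * MAX_SECTORS
print(max_bytes)                   # 2199023255552 bytes
print(round(max_bytes / 1e12, 2))  # ~2.2 decimal terabytes
```

Anything beyond that boundary is simply invisible to a pure-BIOS/MBR system, which is why UEFI with GPT partitioning matters for 3 TB drives.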

The C-60 is a fairly cool-running processor, so it is cooled by a single aluminum heatsink. This is enough to keep the processor temperature below 50-55 degrees even under full load, which is normal.

The set of ports is quite standard; the only disappointment is the absence of the newer USB 3.0. And I especially want to note the presence of a full-fledged gigabit network port:

On this board I installed two 2 GB DDR3-1333 modules from Patriot:

Windows 7 Ultimate was installed on a 500 GB WD Green HDD, and for data I purchased a 3 TB Hitachi-Toshiba HDD:

All this equipment is powered by a 400-watt FSP power supply, which of course leaves plenty of headroom.

The final step was to assemble all this equipment into a mini-ITX case.

Immediately after assembly, I installed Windows 7 Ultimate on my computer (the installation took about 2 hours, which is normal, given the low speed of the processor).

After all this, I disconnected the keyboard, mouse and monitor from the computer. In fact, there was only one system unit left connected to the local network via cable.

It is enough to remember this PC's local IP address on the network in order to connect to it from any machine through the standard Windows "Remote Desktop Connection" utility:
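Before launching the Remote Desktop client it can be handy to check that the server's RDP port (3389 by default) is reachable at all. A small sketch using only the standard library; the address in the example is a placeholder for your server's real LAN IP:

```python
import socket

def is_rdp_reachable(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the connect succeeded

if __name__ == "__main__":
    # Placeholder address: substitute your server's actual LAN IP.
    print(is_rdp_reachable("192.168.1.50"))
```

If this returns False, check that Remote Desktop is enabled on the server and that the Windows firewall allows inbound connections on port 3389.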

I deliberately did not install specialized operating systems for organizing file storage, such as FreeNAS. Indeed, in this case, there would be little point in assembling a separate PC for these needs. You could just buy a NAS.

But a separate home server that can be loaded with work overnight and left is more interesting. In addition, the familiar Windows 7 interface is easy to manage.

In total, the cost of the home server WITHOUT hard drives came to 6,000 rubles.

Important addition

With any network storage, network bandwidth is very important. Even a regular 100 Megabit wired network is no joy when, say, you are archiving from your computer to the home server: transferring 100 GB over a 100 Megabit network takes several hours.
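The "several hours" claim is easy to check: 100 GB is 800 gigabits, and a 100 Mbit/s link moves at most 100 megabits per second (real-world throughput is lower still because of protocol overhead):

```python
def transfer_hours(size_gb: float, link_mbps: float) -> float:
    """Ideal time in hours to move size_gb gigabytes over a link_mbps link."""
    bits = size_gb * 1e9 * 8          # decimal gigabytes -> bits
    seconds = bits / (link_mbps * 1e6)
    return seconds / 3600

print(round(transfer_hours(100, 100), 2))   # 100 Mbit/s:  ~2.22 h
print(round(transfer_hours(100, 30), 2))    # 802.11g-ish: ~7.41 h
print(round(transfer_hours(100, 1000), 2))  # gigabit:     ~0.22 h (~13 min)
```

These are best-case figures; in practice TCP and filesystem overhead stretch them further, which is exactly why gigabit wiring pays off for a home server.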

And what can we say about Wi-Fi? It's good if you use 802.11n, where the network speed is around 100 Megabits. But what if the standard is 802.11g, where the speed is rarely more than 30 Megabits? That is very, very little.

The ideal option is when interaction with the server happens over a wired Gigabit Ethernet network. In that case, it is really fast.

But I will tell you how to create such a network quickly and at minimal cost in a separate article.

It happens that hobbyists and IT professionals create data centers in their homes, placing equipment in makeshift server rooms, garages, basements or home offices. Such people are called server huggers. These are people who want to be closer to the equipment they use.

Home data centers, or "cave data centers" as they are called, play an important role in modern life and the development of IT. Their owners, unlike operators of large-scale data centers, are pioneers in testing server equipment: these enthusiasts have a passion for IT and, as a rule, are among the first to adopt new server systems, testing them under extreme conditions.


The reasons for creating such data centers vary: small web hosting, launching your own projects, or simply a passion for electronics and IT. Whatever the motivation, such a project in any case requires some adaptation, including modifying power and network connections and updating cabling throughout the house. Here are several examples of such projects.

Cloud in the basement

Canadian IT specialist Alain Boudreault has enterprise-class equipment from manufacturers such as Dell, HP, Sun, Juniper and IBM (including an IBM BladeCenter) in his arsenal. He placed racks with this equipment in the basement of his house. His website contains a detailed review of the installation, including diagrams of all components. The data center includes an OpenStack MAAS (Metal as a Service) cloud and several storage systems (iSCSI and Fibre Channel).

“My first step was to install an electrical feed that could provide 40 amperes at 240 volts, i.e. capable of handling a load of 9.6 kW if necessary,” says Alain. He teaches application development and uses his DC for testing. “The servers are rarely all running at once, so the average draw is 1-2 kW,” he says. Electricity in Quebec costs about 7 cents per kWh. Boudreault writes that this type of DC is not for the faint of heart.

Data Center - YouTube Star

Some home DC owners post videos about them on YouTube. The most popular of these is the Home Data Center Project, another Canadian project that began in 2005 with two computers in a closet and grew to over 60 servers by 2013. The project was documented in a series of videos that have received over 500,000 views on YouTube. The videos and website also document extensive cabling, cooling, and network infrastructure improvements.

“This project was not developed for profit,” writes developer Etienne Vailleux of Hyperweb Technologies. “This installation was built as a hobby, but after a while it quickly grew into a passion.”

In 2013, the project migrated from one house to another. “Part of the foundation was specifically designed to accommodate servers and air conditioners,” Vailleux said. “The project currently runs 15 servers on a 60 Mbit/s connection.”

Surprising your provider

Sometimes people install entire data warehouses. For example, in 2012, an IT professional known as houkouonchi posted a video of his rack that received over 220,000 views on YouTube.

“The installation was not actually done in a data center. Not many people have a full-size rack capable of storing more than 150 TB of data,” he wrote. “The rack is anchored through the wood floor into the cement foundation of the house. Fully equipped, it draws only 1 kW of power, but its throughput is a completely different matter.”

In 2013, houkouonchi said he was contacted by Verizon, which was surprised to see a home Internet user generating more than 50 terabytes of traffic per month. Hosting a server with such a large traffic generation violated the terms of service for the home Internet service, and he was forced to switch to a business plan.

Here is a tour of houkouonchi's rack posted on YouTube.

Racks from IKEA

Why use standard data center racks at home when you can house the equipment in a stylish IKEA table? In one home data center build, hobbyists adapted the Swedish LACK side table to host servers and network equipment, creating the LACKRack. It turns out that the space between the legs is 19 inches, the same as the width of a standard slot in specialized server racks. Improvised mounting rails were made from angle brackets screwed to the table legs.

The absence of racks stimulated many design innovations. Frank Denneman, technology evangelist at PernixData, adapted the original LACKRack specification to create a portable 19-inch rack.

“My home office is designed to be an office, not a data center,” Denneman writes. “So I tried to accommodate 19-inch server racks without ruining the home office aesthetic.” You can place this rack anywhere in the house.

What a proper mini-server should look like

So, you are the head of a company who has decided to set up a server room in the office, or simply an enthusiast who wants to try everything in life and build one at home, in the basement or in the garage. Why you need a server room is not so important; if you have already decided to create such a marvel, you need to know what it should look like. Ideally, the server room should comply with the TIA-569 standard. The list of requirements for the room looks something like this:
  • the minimum area should be 12 sq.m, and the ceiling height should be at least 2.44 m;
  • the room should not be decorated with flammable materials;
  • there must be at least one double grounded socket in the room, and if you strictly follow the standard, a room of 12 square meters should have 4 such sockets;
  • the server room should be located away from strong sources of electromagnetic radiation (a server room 2 m from a transformer substation is a bad idea);
  • it is recommended to use halogen lamps for lighting; energy-saving lamps are also suitable, as they provide minimal heat generation, good lighting and a long service life;
  • The humidity in the room should be 30-55% at a temperature of 18 to 24 degrees Celsius.
Requirements for the rack itself:
  • The rack width must be 19 inches (482.6 mm);
  • the depth is selected depending on the equipment used: 60, 80 or 90 cm;
  • mounting holes are located on the vertical members of the rack every 1.75 inches (4.4 cm);
  • the height of standard racks is 6, 12, 20, 42 U, etc.
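The arithmetic behind those heights is simple: one rack unit (1U) is the 1.75-inch hole pitch mentioned above, so the usable mounting height of each standard size follows directly:

```python
U_INCHES = 1.75       # one rack unit: 1.75 in, per the 19-inch rack convention
MM_PER_INCH = 25.4

def rack_height_mm(units: int) -> float:
    """Usable mounting height of a rack with the given number of units."""
    return units * U_INCHES * MM_PER_INCH

for u in (6, 12, 20, 42):
    print(f"{u:>2} U = {round(rack_height_mm(u))} mm")
# A full-height 42U rack offers about 1867 mm of mounting space.
```

This is only the mounting height between the rails; the overall cabinet is taller once the frame, feet and fans are added.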
The optimal temperature for server operation is 20 degrees Celsius. This does not mean the entire room has to be exactly that temperature; it should at least hold in the rack with the servers. And if you cannot afford a cooled server cabinet or an air conditioner, you can solve this problem with regular household fans: get creative and assemble them into the rack so they perform the same function as a cooling door in a server cabinet.

To minimize possible power problems, it is worth using an uninterruptible power supply (UPS). A 5-6 kVA UPS is best suited, but if your budget is limited or you do not plan to run a lot of powerful equipment (say, only 3 or 4 machines), you can use a weaker UPS at your own risk. Sales consultants can offer UPSes with different topologies: Line-Interactive, Passive Standby (Off-Line) and Double-Conversion (On-Line). Manufacturers claim that a Line-Interactive UPS is the best option for home servers, but this is not entirely true. The best option is still an On-Line (double-conversion) UPS, for example the APC Smart-UPS RT 5000VA.

These tips cover the main things you need to know to set up a server room and keep your equipment running comfortably.

I would like to add that no matter how tempting the prospect of hosting servers at home may be, it is worth understanding that achieving acceptable uptime at home is quite difficult. As an example, here are a few words from a forum.

You will not get anywhere near five-nines uptime (about 5 minutes of downtime per year) because of the human factor. Here is what happened in my practice and led to such low uptime:
  • forgot to pay for internet;
  • the hard drive in the server died, and there was no RAID;
  • clogged with dust - needs to be cleaned;
  • the fan is dead - it needs to be replaced;
  • touched a wire;
  • the server was used as a file dump, the space ran out because several films were recorded;
  • the Linux kernel was updated and the computer did not come back up after the reboot;
  • the provider suddenly changed the DNS settings;
  • the light blinked;
  • parents decided to wash the floors;
  • the sata cable fell out of the connector;
  • the Wi-Fi card that shared the Internet suddenly froze and stalled the whole computer, etc.

It is also necessary to remember that:

  • this is not a cheap idea;
  • It will be difficult for you to ensure optimal conditions for the servers to operate;
  • in case of fire or the like, you risk not only data and equipment but also your living space;
  • constant monitoring of the servers is only possible if you are always within easy reach of your server room;
  • during long trips your project will be frozen, since leaving the equipment running unsupervised is a big risk.
Renting equipment from a DC is much more practical. There your equipment is always under supervision. In addition, DCs are built exclusively for servers and are optimized to the maximum for comfortable operation of the equipment. You are freed from the need to buy expensive hardware, and you pay for it only while you actually use it. Rented equipment also comes with a staff of specialists for quick problem resolution. And the most important advantage: your equipment may be located outside your country of residence, which in turn can protect you from visits by unexpected guests.

10 signs that you are a serverophile

  • You prefer large premises with air conditioning and a minimum of upholstered furniture, as well as fluorescent lighting;
  • constant hum and white noise soothe you;
  • using fingerprint or hand biometrics to get into a room still fascinates you;
  • You can't walk past messy or disorganized cable connections without voicing disapproval and shaking your head;
  • a flashing green or yellow light has a calming, almost hypnotic effect when you look at it;
  • You like the feeling of cold from metal racks, you often want to touch them;
  • You think that the cloud is the same as virtualization, something worth looking into someday, but for now it needs to solve current user problems;
  • You believe that cloud data is not secure, no matter what the provider tells you;
  • You have your own thoughts on how to improve the operation of equipment through direct influence (for example, replacing elements);
  • You have a passion for computer hardware, always trying to improve it and find innovative solutions for optimal performance.
Do you know of examples of cave DCs? Perhaps such craftsmen live in your building, or maybe you have set up a small rack yourself? Share your experience.

Hi all! The idea of building my own home server came to me quite a long time ago, but for various reasons it was constantly postponed. Finally, I decided it was time to act.

There will be quite a lot of material and description of the whole process, so I will make several articles, in each of which I will try to describe in detail all the steps so that even a beginner can cope with this in the future. Do not miss!

In general, I am also a newbie in this matter, so I will figure everything out as I go. There are many issues to be resolved, from the choice of components and operating system to solving minor technical problems.

What is a home server?

At its core, it is a regular computer that performs tasks for which using your main computer is impractical. It must work and be accessible over the network 24 hours a day, 7 days a week, while being cool, quiet and quite economical in terms of energy consumption.

Home server tasks

  • Storage and backup of important files;
  • Organizing access to files over a local network and via the Internet;
  • Organizing a media server for watching movies;
  • Organization of video surveillance.

As you can see, the tasks are very diverse, and there are even more ways to implement them. And this is far from a full list of tasks that can be assigned to the server. Everything is limited only by your imagination and knowledge, and your knowledge is limited only by your desire. 😉

I plan to spend about 6,000 rubles on this idea. We'll see what comes of it, but you must admit this is quite an affordable sum for such an extensive list of possibilities. Most importantly, we get an excellent opportunity to study network technologies and software in detail. Whatever one may say, a computer specialist must always keep up with the times... Let's learn together!

At the moment I have: a Wi-Fi router for sharing the Internet, a desktop computer (connected to the Internet via a Wi-Fi adapter) and a laptop. Now a home server will be added to this network.

The network diagram should look something like this:

Choosing a home server

Having searched the Internet for a suitable ready-made option, I realized that on such a limited budget you can only count on building it yourself. All ready-made platforms are either more expensive or too limited in performance and functionality.

For example, you can use ready-made barebone platforms for building PCs: boards with a built-in processor that only require installing RAM and a hard drive. This is a very good option if you want a super-compact PC in a pretty package, but in my opinion the performance of such systems for their price leaves much to be desired.

By the way, one good option for organizing a home server is to buy a ready-made NAS (Network Attached Storage). These are ready-made devices (essentially computers) that connect to the network and contain one or more hard drives. They have a built-in web interface and a huge number of settings, plus built-in applications for photo galleries, mail servers, media servers, torrent clients, etc. All of this works, as they say, "out of the box": you just connect the power, log into the device over the network and apply the necessary settings. Another advantage is silence and low power consumption.

NAS is a great option for those for whom the built-in capabilities are enough.

I decided not to use ready-made solutions and to build a mini-ITX computer instead. This way we get more performance, more flexibility, and +10 to the "computer" skill. Naturally, the downside is that you will need to configure everything yourself. Although... that is not such a downside.

Selection of components

As the platform for the future server I chose the GIGABYTE GA-J1800N-D2H motherboard in mini-ITX format. This board has a built-in dual-core Intel Celeron J1800 processor. It is not the most powerful processor, but it is quite enough for a home server.

The processor's undeniable advantages are low power consumption and low heat output: it runs cool enough that a passive cooling system suffices. The absence of fans makes this PC virtually silent.

The board has built-in mouse and keyboard connectors, VGA and HDMI video outputs for connecting a monitor or TV, four USB ports plus one USB 3.0 port, a gigabit network interface and audio inputs/outputs. In addition, there is a PCI-E x1 slot for expansion cards.

One of the key points in choosing it was its cost: approximately 2300 rubles. For this money we get a quiet and versatile board with an integrated processor.

The motherboard takes low-voltage SO-DIMM RAM, so for memory I chose a CRUCIAL CT25664BF1339 DDR3L 2 GB module.

The deciding factor was its price: 850 rubles.

The case for the new PC is also Mini-ITX. I chose from the simplest options under 2000 rubles and settled on the FORMULA FW-107D case.

The case comes with a 60 W power supply already installed, which is quite enough for the chosen motherboard.

As the hard drive, I will initially use the 2.5″ 320 GB HDD I already have from an external enclosure. All to save the budget. If it fails to satisfy me later for some reason, I will replace it, but for setup and first experiments it is quite enough.

Actually, these are all the components needed for assembly. You can go shopping and start implementing the idea, but I will talk about this in the next part. Building a home server. Part 2.

Write in the comments whether you would be interested in following this experiment. Write additions or ask questions, and I will try to answer them. Bye!

Why, in fact, would you need such a thing? What can a home server give you?

If you just want to host a home page or blog, exchange files with friends via FTP, or get e-mail on your own domain, it will probably be easier to use shared virtual hosting, or at most order a virtual dedicated server.

It is also more convenient to conduct various experiments at home, since the server is nearby: if you break something, you do not need to wait for a response from the hoster's technical support; you can immediately plug in a monitor and fix what is broken.

Let me note right away that we are talking about a server intended primarily for access from the outside world.
A server that distributes content or provides web (IRC, etc.) services to a few machines in your building has completely different requirements :)

Not everyone can afford to place their own server in a data center, but with the spread of broadband access technologies it becomes possible to run a server for the web and other tasks right from home.
Right now this is realistically available to residents of Moscow and St. Petersburg, but I am sure that, crises notwithstanding, a home channel with suitable characteristics will very soon appear in every major Russian city.

What channel do you need?

First of all, it must be:

1. Wide, symmetrical and unlimited.
2. The provider must issue static IPs
3. There should be no port filtering, either on ports incoming to the server or on those going out to the world.
4. It is advisable that there are no VPNs.
5. If the provider is ready to correct your DNS-PTR record, then this will immediately eliminate a number of problems with mail outgoing from your server.

Now in more detail on each point.

1.a Symmetry

Most home Internet channels have one feature that means almost nothing to a user who mainly surfs: their width (or, as people say, speed) differs depending on direction. The typical home user downloads far more from the Internet than they upload to it.

For example, Moscow providers such as Akado and Stream are ready to provide a very wide downstream channel ("down", for downloading), but the upstream ("up", for uploading) bandwidth even on the most expensive tariff plans differs little from the cheapest ones.

An unbalanced channel is terrible for a home public server, because the vast majority of server traffic goes up; no matter how expensive and fancy your Internet channel is, it will in effect sit idle. Nobody will notice your tens of megabits down, but everyone will notice the lag when downloading files from your server.
For a server, one might say, the importance of down and up switches places.

Therefore, for example, channels in data centers and server-hosting tariffs are designed on the assumption that up traffic will be several times greater than down traffic. In Russia, most data centers even charge for violating the traffic ratio (usually incoming:outgoing 1:4, and often there is also a limit on the share of foreign traffic).

I have never heard of an Internet provider punishing anyone for violating the ratio (that is, for sending more traffic than you download); on the contrary, I think the situation will keep changing in the direction we need.

Many home users share files over p2p networks, and in those networks, to download well you need a high rating, which can only be earned by seeding content. Already, almost all "advanced" users go to torrents rather than to a store for a new mp3 album by their favorite band.
Providers understand the trend and adapt. Of course, providers can be constrained by various factors: the characteristics of their channel, the last-mile technology (ADSL operators simply have no technical means to make the channel symmetrical), etc.

1.b Unlimited

As for an unlimited channel: obviously, if traffic is metered, an unexpected sharp increase in traffic can be very painful for your wallet.
If you provide some kind of public service, you are unlikely to be able to control its visitors and the traffic they generate, which you will be paying for.
Even if, as is usual with metered tariffs, outgoing traffic is free under your plan, and even if you forbid users of the service from uploading files, there will always be incoming traffic, no matter what tricks you resort to. It will be roughly a seventh to a fifth of the web server's outgoing traffic.
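As a rough illustration of that rule of thumb (the 700 GB monthly figure below is an invented example, not taken from the article):

```python
def incoming_estimate(outgoing_gb: float) -> tuple[float, float]:
    """Rough band of incoming traffic for a web server,
    per the one-seventh to one-fifth rule of thumb."""
    return outgoing_gb / 7, outgoing_gb / 5

low, high = incoming_estimate(700)  # hypothetical 700 GB served per month
print(f"{round(low)}-{round(high)} GB incoming")  # 100-140 GB
```

So even with "free" outgoing traffic, on a metered plan you would still pay for a substantial incoming volume.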

1.c Width

Do you have a cheap unlimited symmetrical channel? Great! What's that, it is only 128 kbit/s wide? Believe me, in that case no one will need your service.

Your entire bandwidth will be eaten by a single visitor on broadband, who will be very annoyed that "the service is slow", and who will leave and never return.
I am sorry to disappoint you, but it is probably not worth running a home server if your upstream is only 128 kbit/s.
I would put the minimum at 512 kbit/s.

But 20 megabits from an Ethernet provider would be the best solution; in that case visitors would hardly notice the difference between a service hosted at home and one hosted in a data center.
There are no such channels in the regions yet, but in Moscow even faster connections are already being offered.

A separate word about network connectivity.
On the Internet, so-called "black holes" sometimes appear in which traffic disappears. Unfortunately, hosting providers who colocate servers in a DC usually fight for connectivity more actively than consumer Internet providers.

It is worth keeping in mind that if your site cannot be reached by visitors from, say, Kamchatka, the provider's technical support may not even accept your connectivity complaint for consideration, since you are using the channel for other than its intended purpose.

2. Static IP address

Some providers, such as Stream, simply do not issue static IP addresses to home users.
If the server's IP address keeps changing, users of your services simply will not be able to reach them.
You can try to get out using DynDNS services and even host the site like this:

host h.shaggy-cat.ru
h.shaggy-cat.ru is an alias for shaggy-cat.dyndns.org.
shaggy-cat.dyndns.org has address 91.77.252.108

Here the third-level domain is a CNAME record pointing at a host of a free dynamic DNS service.
Your server connects from time to time, using a special client program, to the DynDNS service, which updates the A record for your domain.
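From a visitor's point of view the whole chain collapses into a single lookup: the resolver follows the CNAME to the DynDNS A record automatically. A minimal sketch using only the standard library (the hostname in the comment is the alias from the example above):

```python
import socket

def resolve(hostname: str) -> str:
    """Resolve a hostname to an IPv4 address, following any CNAME chain."""
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    # e.g. resolve("h.shaggy-cat.ru") would follow the CNAME to the
    # dyndns.org A record and return whatever IP was last pushed there.
    print(resolve("localhost"))
```

The catch, as the author notes below, is that the A record lags behind the real address whenever the update client or the DynDNS service hiccups.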

The dyndns.org service supports only third-level domains (subdomains of domains belonging to the service) free of charge.
If you are willing to pay, the service can support your own domain.

When I did not have a channel with a static IP, I kept it simple :) I just created a DNS CNAME record.

You can read separately about setting up DynDNS on Red Hat-like systems.
Typically, ADSL modems and cheap home hardware routers can use DynDNS as well.

However, believe me, you can avoid a huge number of problems, sleepless nights and rakes to step on if you simply get a static IP address.
By any means: by paying for it as an option on your tariff plan, by getting the provider's admin drunk, or, ultimately, by sleeping with him if you're a girl :)))
All the effort you make to get a static IP will pay off.
My personal experience only testifies to the constant glitches of DynDNS services :(

I got a static IP more simply: I switched to another Internet provider, keeping Stream as a backup link on the cheapest tariff plan for now.

3. Port filtering

Some Internet providers, tired of complaints from infected clients, simply block incoming ports through which network worms and attackers could damage a client's computer.
Often the list also includes ports that typical Windows trojans have nothing to do with. For example, Stream blocks incoming ports 80 (goodbye, web server!), 21 (goodbye, FTP!) and 25 (goodbye, mail MX!).

It is clear that users are unlikely to appreciate the beauty of your site's URL if they have to reach it at something like:

http://pupkin.ru:8888

As for outgoing ports, port 25 is usually filtered so that massively infected Windows users cannot send spam.
This may cause some inconvenience if, for example, you want users of your services to receive notifications by mail.
In that case you can configure the local SMTP server to relay mail through another SMTP server.
It is not at all necessary to use the provider's SMTP; a regular free mailbox will do.
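With Postfix, for example, this amounts to pointing the local MTA at a smarthost; a sketch assuming your mail provider accepts authenticated submission on port 587 (the hostname and credentials are placeholders):

```
# /etc/postfix/main.cf -- relay all outgoing mail through a smarthost
relayhost = [smtp.example.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

# /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
# [smtp.example.net]:587    login:password
```

After `postfix reload`, locally generated mail leaves through the smarthost instead of port 25 directly.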

4. VPN and *nix systems.

How much pain, suffering, despair and utter disappointment lies behind this phrase!!
Setting up a VPN to be stable enough for a server was and remains a serious challenge for a newbie.
Even with optimal settings, expect the VPN connection to drop from time to time, so you need scripts that detect the dead link and re-establish the connection.
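Such a watchdog can be a simple shell script run from cron; a sketch, where the interface name, check host and pppd peer name are all assumptions to be adjusted:

```shell
#!/bin/sh
# VPN watchdog sketch: IFACE, CHECK_HOST and PEER are placeholder assumptions
IFACE=ppp0            # the VPN interface to watch
CHECK_HOST=8.8.8.8    # a host that should always answer through the link
PEER=dsl-provider     # the pppd peer name used by pon/poff

link_up() {
    # up if the interface exists and the check host answers one ping
    ip link show "$IFACE" >/dev/null 2>&1 \
        && ping -c 1 -W 2 "$CHECK_HOST" >/dev/null 2>&1
}

restart_vpn() {
    poff "$PEER" >/dev/null 2>&1
    sleep 2
    pon "$PEER" >/dev/null 2>&1
}

if ! link_up; then
    echo "link is down, re-establishing VPN"
    restart_vpn || true
fi
```

Add it to cron every minute or two; it does nothing while the link is healthy and kicks pppd when it is not.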

Even if you are going to run the server under Windows, you will face the same channel-stability problems, if not greater ones, related to the design of this operating system's network subsystem (I have not run such a server myself, but one very good person told me terrible things).

It is possible to simplify the setup by using a hardware router. But a hardware router will not add a single drop of stability to the same buggy poptop.

In Moscow, Corbina Telecom is switching to an L2TP VPN connection, they say that it is much more stable.

If you can, connect to a channel that uses authentication based on the network card's MAC address.
In Moscow, these are, for example, Su-29 Telecom, Qwerty, Akado.
However, a VPN is not nearly as bad as a dynamic IP address. If a VPN is unavoidable, keep in mind that poptop is probably the most unstable option.

5. Reverse DNS (PTR records)

If the provider does not filter outgoing ports, mail sent directly from your server is still highly likely to end up in the spam folder of the destination mailbox.
This is because almost all spam in the modern world is sent from compromised home computers running Windows. IP addresses issued to such users usually have a characteristic DNS PTR record of the form:

host 91.77.252.108
108.252.77.91.in-addr.arpa domain name pointer ppp91-77-252-108.pppoe.mtu-net.ru.

It is quite simple to write a regular expression (examples are easy to find on Google) that distinguishes such hosts from “legitimate” SMTP relays.

If your provider corrects the PTR record for your IP, mail from your server will no longer be filtered by this criterion.
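A minimal sketch of such a check in shell (the keyword list is purely illustrative; real spam filters use far larger pattern sets):

```shell
# Heuristic: does a PTR name look like a dynamic/home-user address?
# The keyword list is an illustrative assumption, not an exhaustive filter.
looks_dynamic() {
    printf '%s\n' "$1" | \
        grep -qiE '(^|[.-])(ppp|pppoe|adsl|dsl|dyn|dhcp|pool|broadband)([.-]|[0-9])'
}

looks_dynamic "ppp91-77-252-108.pppoe.mtu-net.ru" && echo "dynamic-looking"
looks_dynamic "smtp.yandex.ru" || echo "looks like a legitimate relay"
```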

7. Server hardware

Here I find it difficult to give any detailed and professional advice, because I simply have a very poor understanding of hardware, which has never been interesting to me.

The more cores the processor has, the better; in general, the faster, the better. Just don't overclock the system to cosmic speeds with hundreds of coolers: remember, you'll have to sleep next to this monster at night...
If you plan to use virtualization technologies such as Xen, KVM or VMware, pay attention to processors that support Intel VT or AMD Pacifica (AMD-V).

The more RAM, the better. Especially when using Virtualization.

You don't need a fancy video card or a sound system at all. It's better to take a motherboard with integrated video and keep the PCI slot free for, say, a network card.

A UPS is very, very desirable. And, as one good person suggested to me, it's worth enabling the power-on-after-power-failure option in the BIOS.

As for a “rack”: put the server somewhere far away, so the buzzing doesn't bother you and you don't accidentally spill liquid on it, drop it, and so on.
A mezzanine or a pantry is best. Just keep in mind that: a) dust is bad; b) in summer, thanks to global warming ;), in a small stuffy closet it can simply overheat and shut down :(
That said, overheating problems happen in data centers too: .masterhost was almost awarded the Runet Anti-Prize for its original technology of cooling servers with dry ice :)))

8. Software

This is the most important part: without it there is no server at all. You can manage to run a popular service on a dynamic IP with filtered ports and an old, old Pentium II, if you have an idea and a concrete software implementation.
Conversely, you can waste time and money and end up with a dead piece of hardware.

On my home server I use OpenVZ, an industrial-grade container virtualization technology; plenty has been written about it elsewhere.

I use it because it's easy to experiment with (a new container with almost any Linux distribution is created in two or three seconds), because there is virtually no virtualization overhead, and because I consider the technology very progressive.
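For a taste, creating a container looks roughly like this (the template name is an example; the commands require an OpenVZ-enabled kernel and root):

```shell
# Create, address and start a container in seconds (sketch)
vzctl create 101 --ostemplate centos-5-x86   # CTID 101 from a downloaded template
vzctl set 101 --ipadd 192.168.1.101 --save   # give it a private IP
vzctl set 101 --hostname test.local --save
vzctl start 101
vzctl enter 101                              # get a root shell inside
```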

However, OpenVZ is beyond the scope of this post; I hope someday I'll get around to describing it and how I use it, just as informally as now.

I recommend using some virtualization technology, since you may want to host many different services on your server, each with its own requirements for the software environment (sometimes incompatible with those of another service you'd also like to run), for allocated resources, and with developers' differing attitudes toward the security of their product.

With virtualization on a home server you get the same consolidation, and your home will not turn into a branch office of a data center.
One, or at most two, powerful machines will be enough to solve any problem in virtual machines hosted on the server.

I advise you to pay attention to the following technologies:

a) Xen
b) KVM
c) OpenVZ
d) the _server_ editions of VMware

A detailed description of these technologies is also beyond the scope of the article; I will only express general considerations.

The most fashionable virtualization solution right now is VMware. Its popularity is due to its ease of setup and administration.
However, VMware is not without drawbacks. The main one is that the most powerful version, VMware ESX Server, costs money (the free VMware Server looks very dull next to Xen or OpenVZ); I would also note less-than-ideal hardware support and large performance losses under virtualization.
I would call VMware a pop solution: if you want “everything at once” and are willing to put up with some inflexibility, there is probably nothing better for you.

KVM is perhaps the most promising technology of all these, given the attention RedHat is paying to it and the dynamics of RedHat itself. However, now the technology clearly remains behind its competitors in terms of the number of features.

Xen is a very interesting and powerful technology; there is a site with a large number of Russian-language articles about it. When choosing a software platform I hesitated for a long time between Xen and OpenVZ and, as I wrote above, chose OpenVZ.

OpenVZ's main advantages: as I wrote above, it works with virtually no performance loss, and there are dozens of VPS templates with a variety of software and distributions that can be deployed in moments.
Very convenient for experiments :)
The main disadvantage is that only Linux distributions can be virtualized.

This paragraph does not reflect even a fraction of what OpenVZ has become for me. I really, really hope that I will get together and write an article on using this system.

If you plan to run several virtual servers with only one external IP address (which will most likely be the case with a home server), you can give the virtual machines addresses from private ranges intended for local networks and forward ports from the external IP to the virtual machines (iptables DNAT in Linux).
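A DNAT sketch, assuming the external interface is eth0 and a web container lives at 192.168.1.101 (both names are assumptions; root required):

```shell
# Forward incoming port 8080 on the host to port 80 of the container
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 \
         -j DNAT --to-destination 192.168.1.101:80
# Let the forwarded packets through, and masquerade replies from the private range
iptables -A FORWARD -p tcp -d 192.168.1.101 --dport 80 -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
```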

If you are planning more than one web server, plain port forwarding will not help. I solved this with the nginx accelerating HTTP reverse proxy on a separate VPS.
This nginx proxies HTTP connections to one VPS or another. Maybe I'll write about that someday :))
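Schematically, the nginx side looks like this (the server names and internal addresses are placeholders):

```nginx
# nginx.conf fragment: route two sites on one external IP to different VPS
server {
    listen 80;
    server_name site-one.example.org;
    location / {
        proxy_pass http://192.168.1.101:80;   # VPS with the first site
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
server {
    listen 80;
    server_name site-two.example.org;
    location / {
        proxy_pass http://192.168.1.102:80;   # VPS with the second site
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

nginx picks the backend by the Host header, so any number of sites can share the single external IP.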

P.S. Reprinting is permitted only with a link to the original of this note.

Successful setup of your home server, and... bother with computers less, walk more often, go to a museum/theater/cinema/visit/travel!

I assembled my first home server in 2008: a Celeron E1400 on an ASUS mATX platform, all in an excellent Antec NSK 1380 case. The case is really good except for two things: 1. A non-standard power supply format (and, as a result, only low-profile CPU cooling fits). 2. Few drive bays with poor cooling (which is why I never put more than one disk in there, and even that ran cramped and hot).

This machine coped with the role of a router perfectly. But organizing a file dump on it was already inconvenient: the space keeps running out -> you have to swap the disk for a bigger one (cleaning it up is never an option!) -> for that you have to move the system to the new disk -> and if you're moving it anyway, why not upgrade the OS too, since hunting down packages with current time zones for the old release is a chore (warm greetings to Fedora) -> ... And so on every time.

I wanted to build a new server that would let me organize a RAID, or at least simply install several disks, solving the space problem radically and for a long time. And also run several virtual machines for production needs. And also...

But the most important argument is, of course, the desire to play with new hardware! So I settled on the requirements and went shopping, that is, to Google.

Requirements:

  • noiselessness
  • compactness
  • possibility of convenient installation/replacement of disks and a sufficient number of seats (from 4)
  • versatility (more connectors/interfaces, all sorts of different ones, you never know what you want to screw on)

The Mini-ITX form factor was not a mandatory criterion, but it logically followed from the second point. Therefore, I decided for myself that I would try to get the maximum out of it and only as a last resort would I start looking towards mATX.

Disclaimer

Hardware selection

1. Body

The first thing I did was look for the case. There are now a great many of them for Mini-ITX, but most are intended for inexpensive nettops.

The options suitable for a home server/NAS can be counted on one hand:

  • Fractal Design Array R2
  • CFI-A7879
  • Chenbro ES34069
  • Chenbro SR30169

and a couple of others.

Moreover, most of them are difficult or impossible to buy in Russia. In the end I chose the Chenbro SR30169. Its main advantages: convenient installation of four 3.5″ drives (with hot-swap support), well-thought-out cooling with 120mm fans, a standard power supply form factor (the vast majority of other cases use Flex ATX or non-standard units), and ease of assembly.

Video about the internal structure:

2. Motherboard

Criteria:
  • a modern platform with support for Ivy Bridge processors
  • two built-in network cards
  • a PCI-E slot (for a WiFi card)
  • at least four SATA connectors (ideally at least five: 4 for the RAID + 1 for the system)
  • miniPCI-E, just in case
  • a sufficient number of USB ports (preferably 3.0)
  • several video outputs (I didn't know which interface I'd have to connect a display to, so at minimum I wanted HDMI and D-Sub)

  • Intel® Server Board S1200KP
  • Intel® Server Board S1200KPR
  • ZOTAC Z77ITX-A-E
  • Jetway NF9E-Q77

The first two are quite specific. On the one hand, they support Xeon processors and ECC memory; on the other hand, expansion options are very limited: only four USB ports (and only 2.0), only four SATA, no built-in audio, one video output, and a single expansion slot. These points are irrelevant for an office server, but at home I wanted more flexibility. In addition, the board with the KP index does not support 22nm processors, and the KPR was not on sale at the time the machine was assembled (July-August 2012).
The ZOTAC Z77ITX-A-E was not on sale either, although the board is certainly very interesting: a WiFi module included and two gigabit network cards. Beauty!
Several more boards were being prepared for release at that time; I don’t know whether they came out or not, so I’m not writing about them in detail here.

The other day a very timely post about server memory failures came out. I strongly recommend reading it before using the configuration I suggest here for critical tasks.

Ultimately I settled on the Jetway NF9E-Q77 board. It's amazing how much Jetway managed to fit onto a Mini-ITX board! Support for 3rd-generation Intel processors (LGA1155), 6 SATA ports (2xSATA3 + 4xSATA2), 2xUSB3.0 + 4xUSB2.0 (plus a pair of ports of each type on onboard headers), PCI-E + miniPCI-E, 2 gigabit Intel network adapters, and 3 video outputs (HDMI, DVI-D, D-Sub), not counting LVDS. There are also two RS232 ports, RS422/485 on an onboard header, GPIO, a watchdog, and support for iAMT, vPro, etc.
The type of memory used is DDR3 SODIMM.

I could not find this board for sale in Russia, but fortunately it turned up in the German store minipc.de and was delivered by courier. Net of VAT and including shipping it came out to exactly $200, which for such a board is, in my opinion, more than reasonable. Incidentally, the board is manufactured to standards that allow industrial use, which implies increased survivability (according to the Jetway website; the author makes no guarantees =)).

3. Hard drives

For the last 10-15 years I have used only IBM/Hitachi drives. So I chose the model with the maximum capacity at a reasonable price (at the time of assembly that was the HITACHI Deskstar 7K3000 HDS723020BLA642, 2TB) and bought two, planning to buy two more once I had settled on the software (I suspected that would not happen quickly, and so it turned out). Since the chosen case, apart from the four hot-swap bays, can natively take only 2.5″ drives, I decided to install a drive from a laptop there, which I was planning to replace with an SSD anyway.
This winter we purchased two additional HITACHI Deskstar 5K3000 HDS5C3020ALA632 drives.

4. Power supply

I simply chose the power supply unit as the least powerful (and therefore cheapest) of the decent and quiet ones that were available at the nearest hypermarket.
This was AeroCool VP-450.
Of course, in such a compact case it would be better to take a power supply with removable cables, but they cost much more, and there were reviews that in this case the cable connectors could begin to conflict with the processor cooling.

5. Processor

What I needed from a processor was more cores, less heat and a reasonable price. Although no, I also needed a built-in video chip. I chose the Intel Core i5 3550.

6. CPU cooling
Here I wanted silence and good cooling while not getting the dimensions wrong. Of what the nearest stores had, the Arctic Cooling Alpine 11 Plus fit the bill.

Well, the hardware has been purchased, let's start assembling!

Assembly

The author still remembers the times when labeling connectors and jumpers on a motherboard was considered bad manners, when manuals seemed to be written as an afterthought, and when inserting a processor the wrong way round could quickly and very expensively get you a cool keychain for your mobile phone. Not to mention the ritual sprinkling of every assembled machine with the builder's own blood, for which the caring Chinese always left the case edges sharp, in case the assembler forgot his sacrificial knife or, being inexperienced, did not know of the need. Unfortunately, modern manufacturers, in pursuit of profit, care nothing for tradition or the assembler's leisure. Those who hoped to shed a stingy tear of nostalgia over this article will only be disappointed by the material that follows.

The case is made of 0.8mm SGCC steel and feels solid: no gaps or play, all edges neatly rolled. The side walls are secured with thumbscrews. Most of the rear wall is taken up by the power supply mount; the remaining space goes to the ventilation grille and the motherboard I/O panel. There is a retractable eyelet that lets you padlock the case (though only on one side, which here is of little benefit) or put it on a leash, and there is a slot for a Kensington lock.

Inside, the case space is divided into two parts: the back half is intended for the motherboard and power supply, while the front half is almost entirely occupied by the hard drive cage and its cooling.

The cage works with hardware RAID controllers and takes four 3.5″/2.5″ SATA/SAS drives with hot-swap capability. For convenience, disks are loaded from the front of the case.

Each disk is screwed into a tray, which then slides into the cage. When powered on, each tray glows blue. You might think the trays contain LEDs, but the solution is far more elegant: the indicators sit on the back wall of the cage and are brought to the front panel by light pipes!

To limit access to the front panel of the case, there is a flimsy plastic door with a lock. In my opinion, it could be metal, but I’m ready to forgive the manufacturer for this nuance =)

The motherboard truly amazed me! Unfortunately, photographs do not convey the feeling of a well-made product that you get holding this board in your hands. How did the manufacturer manage to fit so much into this tiny thing while still complying with all component-placement standards? To avoid repeating myself I won't list its capabilities again; those interested can refer to the first part of the article or to the specification on the Jetway website.

Although for CPU cooling I had to give up my favorite 120mm format and settle for a compromise 92mm, I still doubted that such a large cooler would go in without incident.

AC Alpine 11 Plus is installed on plastic strips pre-attached to the board. And although these strips fit flush with the surrounding components, the only thing I had to do additionally was to remove the plastic retainer from the PCI-E x16 connector and slightly bend the tail of the connector.

To install the power supply you first remove a special bracket at the top of the case, attach it to the PSU, plug a short extension cord into the power connector, and put the assembly back into the case. This way the manufacturer spares us a power cord sticking out of the top cover.

The power supply has a whole bunch of connectors, half of which we won't need.
All we can do, in the good Russian tradition, is stash them out of the way.
To route the cables, a cable clamp on double-sided tape was dug out of the spare-parts bin.

Now the motherboard can go in. The CPU cooler fit under the power supply with a whole few millimeters to spare. While installing the board I had to remove the drive cage's air duct, but that is easy to do.

The photo with the duct reinstalled shows that the fins of the CPU heatsink run exactly across the direction of airflow from the cage, and for good measure are separated from the duct by the memory modules. Unfortunately, this cooler model cannot be rotated 90 degrees.

No other snags came up during assembly. In particular, when I later decided to add another memory stick, I did so without any problems. So the only critical points in the build are the CPU cooler height and the PSU cable routing.

Cooling efficiency and noise

At the moment there are four 2TB Hitachi drives in the cage; their temperature does not exceed 37 degrees (34 when idle). The 2.5″ system disk usually sits at 31-33 degrees, and the idle processor at 40.

The system's noise is mostly the rustle of air, but to get there I had to fit a speed controller to the cage fan: at stock speed it cannot be called quiet (whatever the manufacturer's brochure claims). In the future I plan to replace it with something quieter.

Conclusion

When I assembled the server I did not yet know what software I would install on it and planned to cover that question in an article. Although in the end (may *nix fans forgive me) I settled on Windows Server 2012, for many this question remains open; comments on the topic are welcome.

P.S. The price of the system without hard drives came out to around 22 thousand rubles.

UPD: note that I needed something more than just a NAS. The same machine has to serve as both a test environment and a development environment. Naturally it would be better to split these roles across different machines, but my apartment isn't that big. That is why the chosen hardware is redundant for a plain NAS, and why devices like Synology would not do.






