ClioSport.net


How many hosts per esx server....



  Not a 320d
What would you say, preferably from experience, is the "right amount" of virtual desktop clients for an ESX server to run?

Resources, resources, etc. I don't have any specs - is there a general rule of thumb?

I've heard that for virtual app servers it's no more than 5 per core.

Got 100 thin client machines to cater for, preferably with room for expansion; it's not a company with unlimited budgets.

*Defo not asking people to do my assignment work here :eek:
 

dk

  911 GTS Cab
It is about 5-10 desktops per core, depending on the profiles of the users; there are VDI calculators on the web to help.

I have a customer who is running about 375 users and they have 7 hosts, each with 2x 6-core processors. The CPU is not really the worry though, it's the memory; those servers have 96GB RAM in them and they are at their limit now. We have put 192GB in our servers for VDI; that's 6 servers for 400 users, again with 2x 6-core processors.

Also for the storage, you need a fair number of spindles, or if using a NetApp, adding something like Flash Cache, or SSD drives if the array supports them. There are dedicated VDI storage boxes out there too.
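For a rough feel of what "a fair number of spindles" means, here's a back-of-envelope sketch (Python). All the figures are illustrative assumptions, not from this thread: ~15 steady-state IOPS per desktop, ~175 IOPS per 15k spindle, a 70% write mix, and a RAID 10 write penalty of 2; real VDI workloads (boot storms especially) vary hugely.

```python
import math

def spindles_needed(desktops, iops_per_desktop=15, iops_per_spindle=175,
                    write_fraction=0.7, raid_write_penalty=2):
    """Back-of-envelope spindle count for a VDI pool.

    Front-end IOPS are split into reads (cost 1 back-end IO each) and
    writes (cost `raid_write_penalty` back-end IOs each, 2 for RAID 10).
    """
    frontend = desktops * iops_per_desktop
    reads = frontend * (1 - write_fraction)
    writes = frontend * write_fraction * raid_write_penalty
    return math.ceil((reads + writes) / iops_per_spindle)

print(spindles_needed(100))  # 15 spindles for 100 desktops, under these assumptions
```

Even with these fairly gentle numbers, 100 desktops want more disks than a general-purpose file server, which is why flash cache or SSD options get attractive.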

It's definitely something you need to spec correctly though, and the main things are memory and storage spindles/IO.

With 100 users, I'd probably say you'd be looking at 3 hosts (that's the minimum you should ever have in a cluster with VMware; presume you're talking VMware View here?), and I would say 2 of the latest processors, and I would put 128GB in each. You need to be able to run all 100 users off 2 servers for when you perform maintenance on the hosts or have a server failure.
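Those rules of thumb can be sanity-checked with a quick sketch (Python). Treat the defaults as assumptions: 7 desktops/core is just the middle of the 5-10 range, and 12 cores matches the 2x 6-core hosts mentioned.

```python
import math

def hosts_needed(desktops, desktops_per_core=7, cores_per_host=12):
    """CPU-based host count with one spare host (N+1), floored at the
    3-host minimum for a VMware cluster."""
    per_host = desktops_per_core * cores_per_host  # desktops one host can carry
    return max(math.ceil(desktops / per_host) + 1, 3)

print(hosts_needed(100))  # 3
print(hosts_needed(400))  # 6, which lines up with the 400-user example above
```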

If you don't have the storage performance, there is another option, and that's using Fusion-io cards; they are flash memory for the individual servers. As it's not shared, it's only really for non-persistent desktops, and you wouldn't be able to bring a host down for maintenance without some downtime for a few users while they log into a new desktop on one of the other hosts.

With View, there are two types of license too, Standard and Premium; the Premium includes things like linked clones to save on storage space etc. Those licenses also come with Enterprise Plus licenses for the hosts and a vCenter license too (but only for managing desktop hosting servers).

If you need more help, I would suggest calling my company and speaking to someone to arrange a chat with an expert.

www.softcat.com
 

dk

  911 GTS Cab
Doesn't make that much difference; Windows 7 needs more resources than XP, but that's normally because people want things like Aero to work on the desktops.

You should be going Windows 7 now anyway; XP is definitely not recommended for a new deployment.
 
Doesn't make that much difference; Windows 7 needs more resources than XP, but that's normally because people want things like Aero to work on the desktops.

You should be going Windows 7 now anyway; XP is definitely not recommended for a new deployment.

It's uni coursework...Windows isn't the only OS out there....
 

dk

  911 GTS Cab
But that does raise a good point people forget about when doing VDI: you need to buy new full licenses for the desktop OS. You can't use the existing licenses from the desktops, as they will be OEM, so you either need to buy full licenses or go on a yearly scheme with MS like an Enterprise Subscription Agreement.
 

dk

  911 GTS Cab
It's uni coursework...Windows isn't the only OS out there....
Oh, I thought it was real life. FFS, doesn't anyone do their own work anymore!!!

And yes, it will be Windows. How many people do you know running anything other than Windows on VDI? I've not come across any, and I've done a lot of deployments. Yes, you will get the odd user needing a Linux box or Mac OS, but very rarely; VDI is very much a Windows thing.
 
  Not a 320d
It is about 5-10 desktops per core, depending on the profiles of the users; there are VDI calculators on the web to help.

I have a customer who is running about 375 users and they have 7 hosts, each with 2x 6-core processors. The CPU is not really the worry though, it's the memory; those servers have 96GB RAM in them and they are at their limit now. We have put 192GB in our servers for VDI; that's 6 servers for 400 users, again with 2x 6-core processors.

Also for the storage, you need a fair number of spindles, or if using a NetApp, adding something like Flash Cache, or SSD drives if the array supports them. There are dedicated VDI storage boxes out there too.

It's definitely something you need to spec correctly though, and the main things are memory and storage spindles/IO.

With 100 users, I'd probably say you'd be looking at 3 hosts (that's the minimum you should ever have in a cluster with VMware; presume you're talking VMware View here?), and I would say 2 of the latest processors, and I would put 128GB in each. You need to be able to run all 100 users off 2 servers for when you perform maintenance on the hosts or have a server failure.

If you don't have the storage performance, there is another option, and that's using Fusion-io cards; they are flash memory for the individual servers. As it's not shared, it's only really for non-persistent desktops, and you wouldn't be able to bring a host down for maintenance without some downtime for a few users while they log into a new desktop on one of the other hosts.

With View, there are two types of license too, Standard and Premium; the Premium includes things like linked clones to save on storage space etc. Those licenses also come with Enterprise Plus licenses for the hosts and a vCenter license too (but only for managing desktop hosting servers).

If you need more help, I would suggest calling my company and speaking to someone to arrange a chat with an expert.

www.softcat.com

Yeah, View mate. I had it down for two servers with 50 users on each. Some people seem to think 60 hosts is the norm. I had NO idea that RAM requirements were that high; I'm shocked.

We're actually implementing VDI in our uni lab (with fat clients) and we've bought a server; for 15 users we've got 4 cores and 32GB of RAM.

I've also got to stick redundancy in there somewhere, along with a hybrid cloud/NAS solution, and 2 servers (for failover) for the applications.

Great info again! Many thanks.

Probs best I don't speak with someone, as I'll overcomplicate things and I've got less than two weeks left now.
 

dk

  911 GTS Cab
Yeah, and that's before I realised you weren't actually going to order the kit after ;)

2 servers is fine, but if one has a failure, no matter how small (say a processor fails), you need to have another host to spread the load, so 3 is the minimum you should be looking at.

For RAM, you are looking at about 2-3GB per desktop for Windows 7, plus overhead and the ESXi OS RAM etc., plus having enough in 2 servers to run the desktops when a server fails.

Say you had 100 users and needed 3GB per user; that's 300GB minimum, which over 3 servers is 100GB per server, but you need to be able to run all 100 users on 2 servers, so you actually need 150GB per server. With overheads etc. you would round that up to at least 164GB, but more likely 192GB, as you need to provision for extra users (the number of users never goes down but invariably goes up), and you don't want to run the server at 100% RAM usage; for starters, an alarm will show in vCenter if the host is utilising, say, 85% of its RAM. You will get RAM savings from sharing etc., but you always need to take the worst-case scenario, as running out of RAM on a host will affect many more than one user, unlike underspeccing a single user's physical machine under their desk.
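That arithmetic can be written out as a small sketch (Python; the 10% headroom figure is my assumption standing in for "overheads etc."):

```python
import math

def ram_per_host(users, hosts, gb_per_desktop=3, headroom_pct=10):
    """GB of RAM each host needs so the survivors can carry every
    desktop after one host fails (N-1), plus headroom for the
    hypervisor and to stay clear of vCenter's memory alarms."""
    surviving = hosts - 1
    base = users * gb_per_desktop / surviving  # 150 for 100 users over 3 hosts
    return math.ceil(base * (100 + headroom_pct) / 100)

print(ram_per_host(100, 3))  # 165, then round up to the next realistic DIMM config, e.g. 192GB
```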
 
  Not a 320d
Surely if failover is in place with a mirrored setup, then four servers (two on each side of the logical topology, for example) would be OK?
 

dk

  911 GTS Cab
Surely if failover is in place with a mirrored setup, then four servers (two on each side of the logical topology, for example) would be OK?

Not sure what you mean by this?

Mirrored setup, 4 servers?

It's a single site, right?

If it's 2 sites, then you wouldn't mirror the data between the sites; as virtual desktops are spun up from clones, you would just deploy a new pool at the DR site. The servers they connect to (which you've not mentioned yet) will need to be failed over etc.

If you have 2 sites, then you would have 3 servers on the main site and 2 at the DR site. At the primary site, you wouldn't want to fail over the sites just for a single component failure in a server, but if you didn't have 3 servers, this is what would have to happen, as you wouldn't have enough resources to run all the desktops. At DR you can justify just using 2 servers, as it's only for emergencies and you shouldn't really be over there for long, so you can survive with 2. If it looks like you might be there longer, then after a DR failover you could provision a new server to again provide resiliency.

Hope that makes sense.
 
  Not a 320d
At the moment it is. Just thinking of a standardised approach.

For just the VDI solution I was thinking two servers to cater for the hosts, then implementing redundancy/resiliency so that if one of the servers, the NAS storage, network hardware etc. goes kaput, it can fail over onto a secondary mirror.

This is an edit of one of my diagrams, with just one server for the desktops and another for the storage - this isn't my final diagram. This is on one office site, btw. My old design was this one, mirrored again to provide DR at a second site; it was for an old assignment.

diag.jpg


I see what you mean about using the third server. Kind of like planning for redundant throughput with networks, only this time with a server. Two servers can cater for the lot, but the third provides a bit of resiliency and extra performance, as users are evenly distributed between the three. One goes down and the remaining two can cope for a while until the third is put back in place. Cheaper than having 4 servers mirrored...
 

dk

  911 GTS Cab
Hmm, didn't realise you were using that s**tty SAN-wannabe software rather than a real SAN. How many spindles does the storage server have (or do you not need that detail)? For VDI, it will need more than you probably think.

I don't really know how that software works; I deal with hardware SANs (which admittedly are running software, but they are proper SANs from the main vendors, not a build-your-own type thing). So the DataCore software sits on Windows 2008? And the 2 DataCore servers (acting as the SAN) mirror between themselves for resilience, right? Then the ESXi boxes: you have 2 of those, although I wouldn't draw it and set it up like this personally. The 2 storage servers go together and the 2 ESX hosts go together, then you have 2 iSCSI switches, I'm guessing; again, all 4 hosts would connect to both switches, not how you have set them up. You'd also probably use stacking switches too, like Cisco 3750s.

Your cloud where it says "virtual desktop infrastructure" is actually the ESXi servers; I think you mean the thin clients, which are not really the VDI part. There's also no need for a line down the middle of the image; that's only if you have 2 sites. With a single site, everything is logically in one place, and you wouldn't use a line to signify a mirror. By having the ESXi server in the secondary (mirror) circle you are insinuating that the machines are replicated (mirrored) from the primary, but they aren't; each host would have different users on it. That's how I'd change it (I see you say this is from an old assignment; they don't really sound very similar, so I would start from scratch on the diagram).
 
  Not a 320d
This was from last year, before I learned :) Don't need that detail. The assignment really focuses on virtualisation and its concepts, so there's not much need for tech detail.

I believe DataCore is a hosted setup, although I think it can run on bare metal. I just wanted to have a server running for some other features.

Cheers for the input; I'll sort out what you've told me to do.

Also, when an ESX server goes down, how do the clients which were installed on that server 'fail over' to one of the remaining two?
 
  Rav4
Hmm, didn't realise you were using that s**tty SAN-wannabe software rather than a real SAN. How many spindles does the storage server have (or do you not need that detail)? For VDI, it will need more than you probably think.

I don't really know how that software works; I deal with hardware SANs (which admittedly are running software, but they are proper SANs from the main vendors, not a build-your-own type thing). So the DataCore software sits on Windows 2008? And the 2 DataCore servers (acting as the SAN) mirror between themselves for resilience, right? Then the ESXi boxes: you have 2 of those, although I wouldn't draw it and set it up like this personally. The 2 storage servers go together and the 2 ESX hosts go together, then you have 2 iSCSI switches, I'm guessing; again, all 4 hosts would connect to both switches, not how you have set them up. You'd also probably use stacking switches too, like Cisco 3750s.

Your cloud where it says "virtual desktop infrastructure" is actually the ESXi servers; I think you mean the thin clients, which are not really the VDI part. There's also no need for a line down the middle of the image; that's only if you have 2 sites. With a single site, everything is logically in one place, and you wouldn't use a line to signify a mirror. By having the ESXi server in the secondary (mirror) circle you are insinuating that the machines are replicated (mirrored) from the primary, but they aren't; each host would have different users on it. That's how I'd change it (I see you say this is from an old assignment; they don't really sound very similar, so I would start from scratch on the diagram).

Don't be such a tart.
 
  Rav4
At the moment it is. Just thinking of a standardised approach.

For just the VDI solution I was thinking two servers to cater for the hosts, then implementing redundancy/resiliency so that if one of the servers, the NAS storage, network hardware etc. goes kaput, it can fail over onto a secondary mirror.

This is an edit of one of my diagrams, with just one server for the desktops and another for the storage - this isn't my final diagram. This is on one office site, btw. My old design was this one, mirrored again to provide DR at a second site; it was for an old assignment.

diag.jpg


I see what you mean about using the third server. Kind of like planning for redundant throughput with networks, only this time with a server. Two servers can cater for the lot, but the third provides a bit of resiliency and extra performance, as users are evenly distributed between the three. One goes down and the remaining two can cope for a while until the third is put back in place. Cheaper than having 4 servers mirrored...

If you're going to have failover nodes, i.e. dual nodes on the front end, I hope you have redundant switches and a redundant NAS. For two nodes, depending on how you have it, I would consider using local storage; it'll be quicker, less complicated, and cheaper.

Then back up from the local storage to a local NAS box, and replicate that NAS box off site.

You can use Veeam for that.

Just an alternative...
 
  A3 1.8T
You work for Softcat? One of the girls, Emma, is my account manager... she must hate me, the amount of stuff I get her to quote for pony small desktop orders.

It's all about Isilon for NAS :)
 

