Comments

  • datacentre servers
  • Even still, Kernon, not really. The only place that might be needed is in a lab (staging) env, where you're running a lot of VMs but not a lot of workloads.

    Most servers these days run 16 DIMMs, so that's 2 TB of RAM in one server (16 × 128 GB). I suppose this is an option if you just want to do without storage? But the price point and power needed to run that much RAM is a little crazy once you start looking at 32 server racks.

    But overall this is a good thing, as it should start to drive down the cost of 32 GB and 64 GB low-voltage sticks.
  • Off the top of my head:

    as Gif noted, lab/datacentre environments
    SAN enclosures, where more RAM would equate to a bigger cache
    highly FT VMware environments where mirror VMs are running
    actually, given the number of cores we're cramming into CPUs these days, even normal virtualized environments would benefit from uber RAM counts.
  • Oh, also: PCI RAM-based SSD cards might make sense again.
  • GifGif
    edited November 2015
    "highly FT VMware environments where mirror VMs are running
    actually, given the number of cores we're cramming into CPUs these days, even normal virtualized environments would benefit from uber RAM counts."


    How many cores are you cramming into machines these days? 56, tops?

    More so, what this says to me is that people run super inefficient code and don't know how to run a fucken service and scale properly. But that's a whole different conversation - all we're doing here is masking an issue that could be addressed with leaner code and proper scaling.

    From my experience, and some of you guys can prove me wrong, the largest scale (at peak) I did planning for was a 400,000-core system globally, across some 17 zones. This ended up being a fully DR'ed env, which was eventually scaled back to half the size, but in either case we ran closer to a 1:4 core:RAM ratio on average, and still managed to keep our CPUs running over 50% on average once we scaled back.

    So it's not that we don't need 128 GB sticks for certain applications you mentioned, but if you're sticking 2 TB of RAM into a server you should have closer to 256 cores to go with it. Then you get into powering and cooling these machines - most DCs I've walked through wouldn't be able to handle that without a lot of wasted space - so at larger scale you'll quickly be wasting your money.

    Unless of course (again) you're trying to do without a lot of storage and stuff everything into Memcached or something. And though engineers would love to tell you caching everything is great, because you save on going out to the internet or to the drives etc., it's not always more cost effective either - most analysis I've ever done on cache hit ratios quickly found diminishing returns at large scale (rough numbers on both points are sketched just below this comment).

    But I have to digress a bit: in smaller-scale environments certain server builds probably work better and save more money than not, so I don't want to say it doesn't work out. It's just that expense is a huge driving force behind my statements, and typically scaling a server vertically is more costly than scaling horizontally - in fact you should be scaling diagonally, along a path that is neither breakneck new technology nor old inefficient crap.

    Now I'm curious whether there are white papers on this, hmm.

    This is all in regard to virtual environments, not storage clusters or anything like that - areas where I have little to no experience.
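
For reference, the two quantitative claims in the comment above - the core:RAM ratio and the diminishing returns on cache hit ratios - can be put into rough numbers. This is a minimal Python sketch; the GB-per-core figures and the latencies are made-up placeholders for illustration, not measurements from the poster's environment.

    # Back-of-the-envelope math for the comment above. All figures below
    # (GB per core, cache/backend latencies) are hypothetical placeholders.

    def cores_for_ram(total_ram_gb: float, gb_per_core: float) -> float:
        """Cores you'd want alongside a given amount of RAM at a fixed ratio."""
        return total_ram_gb / gb_per_core

    # 2 TB (2048 GB) of RAM at a few different GB-per-core ratios:
    for gb_per_core in (4, 8, 16):
        print(f"2048 GB at 1 core : {gb_per_core} GB -> ~{cores_for_ram(2048, gb_per_core):.0f} cores")

    def effective_latency_us(hit_ratio: float, cache_us: float = 50.0,
                             backend_us: float = 2000.0) -> float:
        """Average access latency for a given cache hit ratio (illustrative numbers)."""
        return hit_ratio * cache_us + (1 - hit_ratio) * backend_us

    # Each extra chunk of cache buys less and less: 0.50 -> 0.90 saves a lot,
    # 0.95 -> 0.99 saves comparatively little - the 'diminishing returns' point.
    for hr in (0.50, 0.90, 0.95, 0.99):
        print(f"hit ratio {hr:.2f}: ~{effective_latency_us(hr):.0f} us average access")
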
  • Well, in the VMware world (I don't do any Hyper-V) we're up to 10-core Xeons with hyperthreading ... so 20 logical CPUs per socket as far as VMware is concerned, and in a 4-socket board that's 80 "cores".

    That's only in a standard rack-mount server chassis though. Taking into consideration how the VMM swaps active VM-to-CPU mappings, and NUMA, I could see a 128 GB stick being useful if you were virtualizing some heavily memory-intensive stuff.
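
A quick sanity check on the vCPU math above, as a Python sketch. The socket, core, and DIMM counts are just the figures quoted in this thread; the one-NUMA-node-per-socket assumption is mine.

    # Logical CPUs a hypervisor sees per host, plus RAM per NUMA node.
    # Core/DIMM counts are the figures from this thread, not a specific SKU.

    def logical_cpus(sockets: int, cores_per_socket: int, threads_per_core: int = 2) -> int:
        """Logical CPUs (what VMware schedules vCPUs onto) for one host."""
        return sockets * cores_per_socket * threads_per_core

    print(logical_cpus(1, 10))  # 10-core Xeon + hyperthreading -> 20 per socket
    print(logical_cpus(4, 10))  # 4-socket board -> the "80 cores" mentioned above

    # 16 x 128 GB DIMMs spread evenly over a 4-socket box (one NUMA node per
    # socket assumed) -> 2048 GB total, 512 GB local to each node.
    dimms, dimm_gb, sockets = 16, 128, 4
    print(f"{dimms * dimm_gb} GB total, {dimms * dimm_gb // sockets} GB per NUMA node")
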
  • Also, I remember reading somewhere that Intel has an 18-core Xeon coming at some point.
  • edited December 2015
    "More so, what this says to me is that people run super inefficient code and don't know how to run a fucken service and scale properly."
    You imagine that all services that could run in a datacenter are like those in your experience. I need tons of RAM - not because I need all that data all the time, but because I need more-or-less random bits of it as quickly as possible.

    We have a use case that has us very interested in this kind of thing. Specifically, we have GPUs that need access to a ridiculous amount of geometry and RF data (total dataset in the 10-50 TB range). Our main bottleneck is getting random bits of that data into VRAM on our GPUs, to the point where we're severely bottlenecked on I/O even using PCIe x4 NVMe SSDs (which still have to go through main RAM to get to the cards, unfortunately). Right now, we can't store enough data in RAM on each machine, so a lot of them are sitting idle because the data they do have isn't what we need. Storing more of our dataset in main RAM per machine means we can make better use of our GPUs, reduce datacenter space and power use, and get better response times for the end user (a toy model of this is sketched at the end of this comment).

    I agree that if you're running largely text-based services, this amount of RAM might be silly. But anything media/rendering/physics related? We need shit-tons of it - not because it's all being used all the time, but because we spend most of our time moving stuff around.

    Thinking only of RAM:CPU ratios on the basis of VMs is ignoring a large part of high performance computing. We can easily make use of 2 TB of RAM with only 20 cores. Though, in fairness, if you count the GPU 'cores', I suppose it's a lot more cores than 20.
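
A toy model of the bottleneck described in the comment above: a GPU that needs random chunks of a dataset far larger than host RAM, fed either from RAM or from NVMe. The chunk size, bandwidths, and per-chunk compute time below are hypothetical placeholders, not the poster's real numbers - the point is only that GPU utilization tracks how much of the working set sits in host RAM.

    # Toy model: fraction of wall-clock time a GPU spends computing rather than
    # waiting for data, as more of the dataset fits in host RAM. All bandwidths
    # and timings below are hypothetical placeholders.

    def gpu_busy_fraction(hit_fraction: float, chunk_gb: float = 1.0,
                          ram_gbps: float = 20.0, nvme_gbps: float = 3.0,
                          gpu_s_per_chunk: float = 0.05) -> float:
        """hit_fraction = share of chunk requests served from host RAM."""
        fetch_s = (hit_fraction * (chunk_gb / ram_gbps)
                   + (1 - hit_fraction) * (chunk_gb / nvme_gbps))
        return gpu_s_per_chunk / (gpu_s_per_chunk + fetch_s)

    # More RAM per box -> more of the dataset resident -> higher hit fraction.
    for hit in (0.1, 0.5, 0.9):
        print(f"{hit:.0%} of chunks already in RAM -> GPU busy ~{gpu_busy_fraction(hit):.0%}")
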
  • GifGif
    edited December 2015
    *Edit*
    Cool. Ya - my experience is sort of limited in the types of services I've dealt with, and if someone is making the effort to put that much RAM on a stick, I'm assuming there's some need for it. I just think that primarily in the enterprise segment (the IT side of the house) that sort of density is overkill and is masking poor coding/use of memory. I'll assume for a second you're probably also not running VMware?

    I'm afraid to ask how what you explained will scale as you guys get a lot more data/customers. That sounds super power hungry.
  • It's almost ridiculously power hungry. We're trying to run our rendering pipeline on top of Hadoop to get some horizontal scaling, using Spark to keep the dataset 'in RAM' (roughly the pattern sketched at the end of this comment). In theory this should let us scale out horizontally without it being brittle.

    In terms of ridiculous power requirements? Pretty much, yeah. With that said, I suspect this type of data will be used by a small number of higher-paying customers, as opposed to something the general public would be interested in.
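
For the 'keep the dataset in RAM' part mentioned above, a minimal PySpark sketch. The dataset path and format are hypothetical; this only shows the standard persist() call, not the poster's actual pipeline.

    from pyspark.sql import SparkSession
    from pyspark import StorageLevel

    spark = SparkSession.builder.appName("in-ram-dataset").getOrCreate()

    # Hypothetical dataset location/format - the point is only the persist()
    # call, which asks Spark to keep the partitions cached in executor memory.
    df = spark.read.parquet("hdfs:///some/large/dataset")
    df.persist(StorageLevel.MEMORY_ONLY)  # partitions recomputed if evicted
    df.count()                            # materialize the cache

    # Downstream jobs now read from executor RAM instead of going back to disk.
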