24 Hour Technical Support & Seattle Computer Repair
support@seattlecomputer.repair (206) 657-6685
We accept insurance coverage!
Computer Manufacturing
Emerald City IT manufactures powerful, quiet, and durable computers for homes and businesses in Seattle and across Washington State. Our computers are designed for superior peak performance and are backed by a product lifetime guarantee that exceeds those offered by other computer stores in Washington State.
- Details
- Tech Support by: Emerald City IT
- Support Field: Computer Repair and Tech Support
- Support Category: Computer Manufacturing
Intel is in talks with specific PC vendors to cut prices for previous generations of Alder Lake processors, including Core i9 processors, by as much as 20%, according to industry sources.
DigiTimes reported that DRAM spot prices dropped 40% this year, with DDR3 and DDR4 taking significant hits and DDR5 also falling sharply. The 8 GB DDR5 memory modules saw the most prominent decline, dropping as much as 43% in 2022. These figures were tabulated between February and October 2022.
DDR5 memory module prices continue to decline through 2022 and into 2023, marking wider adoption of the new memory standard
DDR5 would have seen more adoption in the PC marketplace had it not been for the high premiums on the memory. Combined with the cost of other components, those premiums pushed consumers toward more affordable solutions for their PC builds. This pressure on the PC marketplace led board manufacturers to design pairs of otherwise identical motherboards, one supporting DDR4 and the other DDR5 memory.
While the decline in pricing is excellent for consumers, it will hurt manufacturers attempting to match the previous year's sales numbers. Currently, on Amazon, you can purchase 16 GB DDR5 memory kits (2 x 8 GB modules) from companies like Corsair, Crucial, Kingston, and PNY for between $63 and $130.
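Those kit prices work out to a fairly wide per-gigabyte spread. A minimal sketch of the arithmetic, using only the $63 and $130 endpoints cited above (the helper function name is our own, not from any source):

```python
# Rough price-per-gigabyte for the 16 GB (2 x 8 GB) DDR5 kits cited above.
# The $63 and $130 endpoints come from the article; the rest is arithmetic.
def price_per_gb(kit_price_usd: float, kit_capacity_gb: int = 16) -> float:
    """Return the cost per gigabyte for a memory kit."""
    return kit_price_usd / kit_capacity_gb

low_end = price_per_gb(63)    # ~3.94 USD/GB
high_end = price_per_gb(130)  # ~8.13 USD/GB
print(f"16 GB DDR5 kits span roughly ${low_end:.2f} to ${high_end:.2f} per GB")
```

So even within the same capacity class, the cheapest kits cost less than half as much per gigabyte as the most expensive ones.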
This next year should be great for consumers wanting to adopt DDR5 into more PCs, as current pricing does not appear likely to climb during 2023. Plus, DDR4 kits are only slightly cheaper than their DDR5 counterparts, meaning consumers have to spend little extra to future-proof their systems for the next few years.
Much of the declining cost is rumored to stem from DRAM manufacturers holding large quantities of overstock and from sellers trying to maintain, but not overfill, their inventories. NAND manufacturers are also seeing sales of TLC NAND decline by nearly twenty percent over the last quarter, while SLC NAND remained roughly flat in price throughout the year.
The biggest concern for consumers and manufacturers alike boils down to costs on the manufacturing side. If companies cannot sustain current memory prices, they will have to slow manufacturing, potentially to a complete stop, interrupting the flow of memory into the global marketplace.
News Sources: DigiTimes, Tom's Hardware
We're all used to dealing with system memory in neat powers of two. As capacity goes up, it follows a predictable binary scale, doubling from 8GB to 16GB to 32GB and so on. But with the introduction of DDR5 and non-binary memory in the datacenter, all of that's changing.
Instead of jumping straight from a 32GB DIMM to a 64GB one, DDR5, for the first time, allows for half steps in memory density. You can now have DIMMs with 24GB, 48GB, 96GB, or more in capacity.
The added flexibility offered by these DIMMs could end up driving down system costs, as customers are no longer forced to buy more memory than they need just to keep their workloads happy.
What the heck is non-binary memory?
Non-binary memory isn't actually all that special. What makes non-binary memory different from standard DDR5 comes down to the chips used to make the DIMMs.
Instead of the 16Gb — that's gigabit — modules found on most DDR5 memory today, non-binary DIMMs use 24Gb DRAM chips. Take 20 of these chips and bake them onto a DIMM, and you're left with 48GB of usable memory after you take into account ECC and metadata storage.
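The chip-count arithmetic above can be sketched in a few lines. One assumption here that the article doesn't spell out: of the 20 chips, 16 carry data and 4 hold ECC and metadata, which is a common ECC DIMM layout and is what makes the usable figure come out to 48GB:

```python
# Sketch: how 24Gb (gigabit) DRAM chips yield a 48GB non-binary DIMM.
# Assumption (not stated in the article): 16 of the 20 chips carry data,
# the other 4 hold ECC/metadata -- a typical ECC DIMM arrangement.
GBIT_PER_CHIP = 24
BITS_PER_BYTE = 8

def usable_capacity_gb(total_chips: int, ecc_chips: int,
                       gbit_per_chip: int = GBIT_PER_CHIP) -> int:
    """Usable DIMM capacity in GB, excluding chips reserved for ECC/metadata."""
    data_chips = total_chips - ecc_chips
    return data_chips * gbit_per_chip // BITS_PER_BYTE

print(usable_capacity_gb(total_chips=20, ecc_chips=4))  # 48 (GB usable)
```

Swap in 16Gb chips and the same layout gives the familiar 32GB, which is why the 24Gb die is exactly a "half step" up.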
According to Brian Drake, senior business development manager at Micron, you can usually get to around 96GB of memory on a DIMM before you're forced to resort to advanced packaging techniques.
Using through-silicon via (TSV) or dual-die packaging, DRAM memory vendors can achieve much higher densities. Using Samsung's eight-layer TSV process, for example, the chipmaker could achieve densities as high as 24GB per DRAM module for 768GB per DIMM.
To date, all of the major memory vendors, including Samsung, SK-Hynix, and Micron, have announced 24Gb modules for use in non-binary DIMMs.
The cost problem
Arguably the biggest selling point behind non-binary memory comes down to cost and flexibility.
"For a typical datacenter, cost of memory is significant and can be even higher than cost of compute," CCS Insights analyst Wayne Lam told The Register.
As our sister site The Next Platform reported earlier this year, memory can account for as much as 14 percent of a server’s cost. And in the cloud, some industry pundits put that number closer to 50 percent.
"Doubling of DRAM capacity — 32GB to 64GB to 128GB — now produces large steps in cost. The cost per bit is fairly constant, therefore, if you keep doubling, the cost increments becomes prohibitively expensive," Lam explained. "Going from 32GB to 48GB to 64GB and 96GB offers gentler price increments."
- How AMD, Intel, Nvidia are keeping their cores from starving
- Astera Labs says its CXL tech can stick DDR5 into PCIe slots
- Why you should start paying attention to CXL now
- Why Intel killed its Optane memory business
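Lam's point about price increments reduces to simple arithmetic. A minimal sketch, assuming a constant cost per bit (the $4/GB figure is an arbitrary placeholder, not a real market price):

```python
# Illustration of Lam's point: with a roughly constant cost per bit,
# binary doubling makes each capacity step cost as much as everything
# before it, while half-step capacities soften the increments.
# The $4/GB figure is a placeholder assumption, not a market price.
COST_PER_GB = 4.0

def step_costs(capacities_gb):
    """Dollar cost of each upgrade step along a capacity ladder."""
    prices = [c * COST_PER_GB for c in capacities_gb]
    return [round(b - a, 2) for a, b in zip(prices, prices[1:])]

binary_steps = step_costs([32, 64, 128])         # [128.0, 256.0]
non_binary_steps = step_costs([32, 48, 64, 96])  # [64.0, 64.0, 128.0]
print(binary_steps, non_binary_steps)
```

Each binary step doubles the outlay, while the half-step ladder lets buyers climb in increments half that size.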
Take this thought experiment as an example:
Say your workload benefits from having 3GB/thread. Using a 96-core AMD Epyc 4-based system with one DIMM per channel, you'd need at least 576GB of memory. However, 32GB DIMMs would leave you 192GB short, while 64GB DIMMs would leave you with just as much in surplus. You could drop down to 10 channels and get closer to your target, but then you're going to take a hit to memory bandwidth and pay extra for the privilege. And this problem only gets worse as you scale up.
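The thought experiment above can be checked with a few lines of arithmetic. Two assumptions the article implies but doesn't state outright: SMT gives two threads per core (so 192 threads), and the platform has 12 memory channels, as on Epyc "Genoa":

```python
# The thought experiment as arithmetic: 96-core Epyc, SMT (2 threads/core),
# 3 GB per thread, one DIMM per channel. The 12-channel count is an
# assumption based on AMD's Epyc 'Genoa' platform.
CORES, THREADS_PER_CORE, GB_PER_THREAD, CHANNELS = 96, 2, 3, 12

target_gb = CORES * THREADS_PER_CORE * GB_PER_THREAD  # 576 GB needed

for dimm_gb in (32, 48, 64):
    total = dimm_gb * CHANNELS
    delta = total - target_gb
    print(f"{dimm_gb} GB DIMMs -> {total} GB total ({delta:+} GB vs target)")
```

With 32GB DIMMs you land 192GB short, with 64GB DIMMs 192GB over, and 48GB DIMMs hit the 576GB target exactly, which is the whole argument for the half-step capacity.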
In a two-DIMM-per-channel configuration — something we'll note AMD doesn't support on Epyc 4 at launch — you could use mixed-capacity DIMMs to zero in on the ideal memory-to-core ratio, but as Drake points out, this isn't a perfect solution.
"Maybe the system has to down clock that two-DIMM-per-channel solution, so it can't run the maximum data rate. Or maybe there's a performance implication of having uneven ranks in each channel," he said.
By comparison, 48GB DIMMs will almost certainly cost less, while allowing you to hit your ideal memory-to-core ratio without sacrificing on bandwidth. And as we’ve talked about in the past, memory bandwidth matters a lot, as chipmakers continue to push the core counts of their chips ever higher.
The calculus is going to look different depending on your needs, but at the end of the day, non-binary memory offers greater flexibility for balancing cost, capacity, and bandwidth.
And there aren't really any downsides to using non-binary DIMMs, Drake said, adding that, in certain situations, they may actually perform better.
What about CXL?
Of course non-binary memory isn't the only way to get around the memory-core ratio problem.
"Technologies such as non-binary capacities are helpful, but so is the move to CXL memory — shared system memory — and on-chip high-bandwidth memory," Lam said.
With the launch of AMD's Epyc 4 CPUs this fall and Intel's upcoming Sapphire Rapids processors next month, customers will soon have another option for adding memory capacity and bandwidth to their systems. Samsung and Astera Labs have both shown off memory-expansion modules, and Marvell plans to offer controllers for similar products in the future.
However, these are less an alternative to non-binary memory and more of a complement to it. In fact, Astera Labs' expansion modules should work just fine with 48GB, 96GB, or larger non-binary DIMMs.
Intel’s Core i9-13900K has set a new world record overclock for a desktop processor, breaking the 9GHz mark for the first time ever.
Tom's Hardware spotted that a team of overclocking experts from Asus managed to get the Raptor Lake flagship to reach 9.008GHz. Naturally, that was with exotic cooling (liquid nitrogen and liquid helium, in this case), with the CPU nestled in an Asus ROG Maximus Z790 Apex motherboard (and a ROG Thor 1600W Titanium as the power supply).
There was some tension in the record-breaking shot at 9GHz, with the team hitting 8.9GHz and coming up a bit short – and running into hurdles such as a frozen USB port that disabled the keyboard and wasted time. In the end, there was only enough liquid helium left for a final shot – which saw the 9.008GHz speed achieved.
The overclocking session also produced a couple of other world records for the 13900K processor, namely PiFast being finished in 6.85 seconds, and SuperPI 32M being completed in 3 minutes 3.788 seconds.
Analysis: Psychological barrier broken
This is a big deal for Intel in the overclocking sphere because, for the longest time before Raptor Lake arrived, AMD ruled the roost at the top of the supercharged CPU rankings. AMD's FX-8370 was the champ, to be precise, with an overclock of 8.722GHz, but its long reign ended recently when Elmor (who was involved in this new effort) hit just over 8.8GHz, and the team has now pushed past the 9GHz barrier.
The 9GHz mark is one of those psychological milestones that make you think it might not be too long before we see a processor that can actually top 10GHz; who knows? Certainly, there could be room for the 13900K to be pushed further, though it won’t get anywhere near that high, of course.
Overclocking like this is not relevant to the average user, given all the caveats, which don't just include ridiculous cooling but also tweaking the CPU to turn off the efficiency cores and Hyper-Threading (so all that's running is eight bare performance cores with eight threads). However, it does show that Raptor Lake, and the Core i9-13900K in particular, has some great overclocking potential, even for those using more mundane liquid or air cooling systems.
Don’t forget that Intel has the 13900KS waiting in the wings too, the special edition of the flagship, which the chip giant has already told us boosts to 6GHz out of the box, with no overclocking needed. We might see this as soon as next month, or certainly early in 2023 as promised by Intel.
The GDDR6 accelerator-in-memory technology is designed for artificial intelligence and big data processing by bringing basic computational functions to memory chips.
SK Hynix's GDDR6-AiM chips can process data in memory at 16 Gbps, which makes certain computations up to 16 times faster than other methods.
The company said that such chips are designed for machine learning, high-performance computing, and big data computation and storage. These types of workloads may not always need truly serious computing performance, but transferring data from memory to a processor takes time and consumes loads of power.
SK Hynix claims the GDDR6-AiM chips run at 1.25V, and its usage reduces power consumption by 80 per cent compared to applications that move data to the CPU and GPU. Such chips are designed to be drop-in compatible with existing GDDR6 memory controllers, so it should be possible to use them even on existing graphics cards to increase their performance in AI, ML, Big Data, and HPC workloads.
The company is not the only one playing around with processing-in-memory (PIM) technology. Samsung has been demonstrating its HBM2 and GDDR6 memory with embedded processing for about two years, but no one seems all that interested yet.
SK Hynix also plans to demonstrate its new HBM3 memory devices 'with the world's best specification for high-performance computing.'