You’ll soon be able to lend your digital Nintendo Switch games
(Image credit: Nintendo)
Nintendo held what appears to be one of the last Nintendo Direct showcases dedicated solely to the Nintendo Switch, as the Switch 2 unveiling looms.
The Direct served as a reminder of a number of games coming to the original Switch and included a few interesting reveals, with highlights such as Metroid Prime 4 and Pokémon Legends Z-A.
One of the more interesting bits revealed in the trailer fest was Nintendo’s new “Virtual Game Cards.”
These new cards essentially turn your digital games into “game cards” that you can lend to family group members for up to two weeks at a time.
The new feature also enables you to move games across Switch systems: you “eject” a digital game from one console and play it on another, which finally gives you a library you can take with you. We imagine this will come in handy if you want to transfer your library from the Switch to a new Switch 2 system.
Virtual Game Cards are coming in April via an update and they’re going to be available on both the original Switch and the Switch 2.
Beyond Metroid Prime 4: Beyond and the next Pokémon title, the Direct showcased a number of new games, from remasters and remakes to indie titles and sequels.
The big two are expected at some point in 2025, though Nintendo didn’t provide a more concrete date than “late 2025” for Pokémon Legends Z-A. Metroid Prime 4 is still listed as just “2025.”
There was also the Nintendo Today! app meant to let you find the “latest news, videos, and comics of your favorite Nintendo games and characters from select franchises.” This app is supposed to be available starting today and requires a Nintendo Account to use.
For many years, purchasing a laptop usually meant deciding between a Windows PC and a Mac. The popular old Apple ad campaign called “Get a Mac” personified the two types of computers and clearly showed the distinctions between them (well, with an obvious bias towards Macs).
They’ve always been seen as geared towards specific personas for specific purposes, which gave both companies a distinct audience.
In 2011, however, Google launched the Chromebook, targeting the education market; early devices were part of a pilot program in US schools to test how well they fit the needs of institutions and remote work.
Soon enough, they became a viable option for students and businesses because of their seamless features and simplicity. This turned down the volume on the Windows vs. Mac debate, as there were (finally) other choices on the market.
While Macs are in their own niche, Chromebooks are often compared to Windows because of their potential audience and price range.
Macs are considered some of the most expensive laptops on the market, with the highest-end models costing well over $3,000. Windows has a much broader range, with budget models starting at just a few hundred. Meanwhile, you can get a Chromebook from $100 to $250.
But other than price, what else would encourage someone to pick a Chromebook over a Windows-based laptop?
While Windows laptops rely on a robust operating system, Chromebooks have minimal hardware requirements, which is one of their biggest advantages. Chrome OS doesn’t demand much in the first place, so you don’t have to think too hard about whether you have enough RAM, storage space, or processing power for the tasks you need to handle.
That means you can browse the web, watch videos, check your email, do some light gaming, and complete many productive tasks smoothly without worrying about extra processes in the background.
Thanks to this, you enjoy faster performance and a no-frills experience, perfect for those who aren’t particularly tech-savvy or who don’t need a laptop for mammoth tasks like video editing or 3D rendering. As a bonus, the need for less powerful hardware means Chromebooks come at a much lower cost than Windows laptops.
As a Google-made device, Chromebooks naturally integrate with the entire Google ecosystem, making them a great choice for users who already use its suite of services. Rather than disrupting or disconnecting from your existing workflow, all your documents and files can automatically sync across your devices and platforms.
Just like Apple has its own proprietary apps and software in its own ecosystem, Chromebooks let you seamlessly access Google apps from your laptop and other Google devices. Google Assistant is also built into the laptop, letting you use your voice to do anything from setting reminders to navigating the internet.
What’s particularly appealing about Chromebooks, in contrast to Windows laptops, is the overall user experience. Windows interfaces can often be cluttered and intricate, exposing many of the backend processes that most users don’t need access to.
Chromebooks cut out that noise and focus on a minimalist experience that caters to a wider range of age groups and a more casual audience. With the essentials readily available and a highly visual experience, users can get right into their tasks without dealing with complex navigation.
Chromebooks offer a familiar, Windows-esque interface, featuring the classic taskbar and desktop, combined with the simplicity of macOS.
However, they stand out because they leverage a limited operating environment, Chrome OS, which is designed to handle essential tasks efficiently. Users don’t have to worry as much about viruses either, as Chrome OS uses sandboxing, isolating each app and process from the others for extra protection.
They Have a Cloud-Based Focus
Chromebooks also come with a cloud-based experience, revolving around web and cloud-based apps rather than built-in ones. This reduces the burden on your local storage, which ties into the minimal hardware requirements mentioned earlier.
Google’s services have long relied on the cloud, which can make managing digital files a lot easier. Rather than storing everything locally, the cloud-based approach lets you access everything from any device, so if your laptop dies or a file gets deleted from your desktop, your backup is ready to go at any time.
Every document gets updated and saved in real time, giving you the peace of mind you need, especially when dealing with important files. If you happen not to have access to the internet at the moment or it gets cut off, most Google apps have offline capabilities that allow you to sync back up once you’re reconnected.
These features are a real asset when it comes to work, school, and personal projects, which is why so many in that target audience turn to Chromebooks.
They Provide Android App Support
Speaking of apps, you’re not limited only to what’s pre-installed or available on the Chromebook when you purchase the laptop.
Chromebooks can run Android apps downloaded from the Google Play Store, which means you can explore a variety of third-party apps to enhance your workflow, whether in the creative or productivity genre. Already have existing apps you use on another Android device?
You can download these onto your laptop too, and sync your experience across devices. Transitioning from your tablet or smartphone to your laptop is convenient, with cross-platform capabilities to help you stay connected.
MSI’s PSUs appear to have been engineered with the RTX 40- and RTX 50-series GPUs as the primary target.
(Image credit: MSI)
Since the 16-pin connector was introduced and later revised, melting adapters and PSU/GPU-side connectors on some of Nvidia’s mainstream GeForce gaming GPUs have been rampant. Two of MSI’s recently launched high-wattage PSUs interestingly feature two 12V-2×6 connectors while providing only one standard 8-pin connector (via OC3D). Unless you get a 12V-2×6 to 8-pin adapter, these PSUs are incompatible with most modern-day GPUs from AMD, Intel, and even Nvidia (RTX 30 and prior), apart from a few specific variants.
The industry (or, more specifically, Nvidia) made the shift towards 16-pin connectors to reduce cable clutter and meet the ever-increasing power demand of new GPUs. After the first GPU meltdown wave hit, the 16-pin standard was revised with pin length changes and is now known as 12V-2×6. Reports of melting RTX 50-series GPU connectors and adapters have been making the rounds on the internet, even with the improved standard. This time, it is believed Nvidia’s power delivery design might be partially responsible.
It seems the industry is catching on to the 16-pin standard, even at the cost of reduced compatibility with other GPUs. MSI’s MPG A1000GS and MPG A1250GS power supplies, targeted at high-wattage GPUs, include two native 16-pin connectors and only a single 8-pin (6+2) connector. If you buy an additional 6+2 connector separately, you can only use a single 8-pin for the CPU, potentially hamstringing CPU performance.
Theoretically, a GPU with a TGP of 225W or lower, like the Intel Arc B580, can easily be powered by one 8-pin connector, since an 8-pin connector is rated for 150W and the PCIe slot supplies up to another 75W. However, AMD’s Radeon RX 9070 series cannot, as these GPUs typically require two or three 8-pin connectors.
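As a rough illustration of that math, here is a small sketch that tallies a GPU’s nominal power budget from the standard PCIe ratings (75W from the slot, 150W per 8-pin connector, 600W per 12V-2×6 connector); these are spec ceilings for illustration, not measured figures for any particular PSU or card:

```python
# Nominal GPU power budget from standard PCIe power ratings (illustrative only).
PCIE_SLOT_W = 75       # power available through the PCIe slot itself
EIGHT_PIN_W = 150      # per 8-pin (6+2) PCIe connector
TWELVE_V_2X6_W = 600   # per 16-pin 12V-2x6 connector

def max_gpu_power(eight_pin: int = 0, twelve_v_2x6: int = 0) -> int:
    """Return the nominal maximum board power for a given connector loadout."""
    return PCIE_SLOT_W + eight_pin * EIGHT_PIN_W + twelve_v_2x6 * TWELVE_V_2X6_W

print(max_gpu_power(eight_pin=1))      # 225W: enough for a 225W TGP card like the Arc B580
print(max_gpu_power(eight_pin=2))      # 375W: the territory of dual 8-pin cards such as the RX 9070 series
print(max_gpu_power(twelve_v_2x6=1))   # 675W: a single 16-pin connector plus the slot
```

That 225W ceiling is exactly why a single 8-pin output on these PSUs rules out most multi-8-pin cards without an adapter.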
To be clear, this isn’t the first time we’ve seen such an implementation. Galax’s GH1300 also shipped with a dual 16-pin design, but that PSU was a one-off, unleashed to tame the RTX 4090 HOF that could quickly draw up to 1kW of power. As of writing, there isn’t any RTX 50-series GPU with multiple 16-pin connectors, so the full potential of these PSUs remains untapped. Nvidia recommends a 1,000W (minimum) PSU for the RTX 5090. This raises concerns about the MPG A1000GS, which is borderline on recommended specs yet ships with two 12V-2×6 connectors.
All things considered, equipping PSUs with multiple 16-pin connectors is counterintuitive for most consumers, especially given the standard’s original goal to simplify cabling. For maximum user flexibility, a PSU design should include one 12V-2×6 connector alongside multiple 8-pin connectors to accommodate high-power GPUs from either brand.
Toshiba has opened a new lab for advancing HDD tech
(Image credit: Toshiba)
Toshiba’s new European HDD Innovation Lab can improve storage tech
Lab offers architecture testing, proof-of-concept setups, and benchmarking
Toshiba claims combining dozens of HDDs can boost overall performance
Toshiba Electronics Europe has opened a new HDD Innovation Lab at its Düsseldorf site, expanding its storage evaluation services across Europe and the Middle East.
The new facility (it already has a smaller one in Dubai) is designed to support customers and partners in optimizing hard disk drive configurations for a range of applications, including cloud storage, surveillance systems, and NAS environments.
Toshiba’s lab will focus on assessing HDD setups for broader IT systems such as storage area networks (SAN), providing a space where hardware configurations can be tested and refined. It will be able to evaluate customer-specific architectures and offer a platform for proof-of-concept testing and performance benchmarking.
“This new HDD Innovation Lab represents a significant leap forward in providing bespoke solutions and advancing HDD technology,” said Rainer Kaese, senior manager for HDD business development.
“It demonstrates Toshiba’s commitment to drive the industry forward and support customers and partners with technical expertise and resources. We look forward to strengthening existing collaborations and exploring the future business opportunities the new facility will bring.”
To carry out these evaluations, the lab brings together servers, JBODs, chassis, controllers, cables, and a variety of software tools. It also includes equipment to accurately measure energy consumption.
While SSDs have a clear speed advantage over HDDs, they are expensive and, according to Kaese via Blocks & Files, “The flash industry is not able to manufacture enough capacity to satisfy the growing demand, and still will not be for a significant while.”
The solution to that problem, Kaese suggested, is to bunch HDDs together.
“We have demonstrated that 60 HDDs in ZFS software defined storage can fill the entire speed of a 100GbE network,” he said, adding, “[We] found that a typical configuration of four HDDs (ie. in small Soho NAS) can fill the 10GbE networks. 12 HDDs match the 25GbE of Enterprise networks, and 60 HDDs would require high end 100GbE network speed to unleash the full performance of the many combined HDDs.”
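Kaese’s figures are easy to sanity-check with a bit of unit conversion. As a back-of-the-envelope sketch (ignoring protocol and RAID/ZFS overhead for simplicity), here is what each of those quoted drive counts implies about per-drive throughput:

```python
# Back-of-the-envelope check on the quoted drive counts: convert each link speed
# from Gbit/s to MB/s and divide by the number of HDDs to see what sequential
# throughput each drive would need to sustain. Overheads are ignored for simplicity.

configs = [
    ("10GbE", 10, 4),     # small SOHO NAS
    ("25GbE", 25, 12),    # enterprise network
    ("100GbE", 100, 60),  # high-end network
]

for label, gbit, drives in configs:
    link_mb_per_s = gbit * 1000 / 8      # raw link capacity in MB/s
    per_drive = link_mb_per_s / drives   # required per-drive throughput
    print(f"{label}: ~{link_mb_per_s:.0f} MB/s across {drives} HDDs -> ~{per_drive:.0f} MB/s per drive")
```

The implied figures of roughly 200-300 MB/s per drive sit at or near the sequential ceiling of current high-capacity HDDs, which is why it takes dozens of them, rather than a handful, to keep a 100GbE link busy.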
Beyond technical testing, the HDD Innovation Lab aims to support knowledge sharing. Insights from evaluations will be passed directly to customers, and Toshiba says it will conduct its own internal assessments of its HDD product lines, publishing the findings as whitepapers and lab reports.
In today’s gaming world, multi-platform titles are becoming the standard. Some players are now using Boosteroid to stream PlayStation exclusives on Xbox consoles. Only purchased PC ports are compatible with the cloud gaming service.
PS5 logo shown on Xbox using Boosteroid cloud gaming (Image source: Xbox Wire, Sony PlayStation)
Microsoft loyalists may feel betrayed as more Xbox titles become playable on the PS5. However, some gamers are getting their revenge by playing former PlayStation exclusives on Xbox consoles. The Boosteroid cloud gaming service allows streaming select PlayStation games in the Edge browser. Of course, there are caveats, like needing to own a PC port of any PS4 or PS5 title.
Just as Xbox Gaming has gone cross-platform, Sony has increased its presence on PCs. Titles including God of War Ragnarok, The Last of Us Part 1, and Helldivers 2 are available on Steam or the Epic Games Store. Still, Sony has tried to prevent any cloud gaming service from streaming these titles on Xbox consoles. Nvidia GeForce Now has an Xbox app, but the service doesn’t offer PlayStation exclusives.
God of War Ragnarok playing on Xbox console (Image source: screenshot, Console Gaming subreddit)
Instead of an app, Boosteroid relies on the Edge browser, accessible on Xbox consoles. Gamers must subscribe to the service for $7-$10 monthly and own the PlayStation game on Steam or another compatible marketplace. With a fast enough internet connection, gamers can play titles like God of War Ragnarok in 4K and at up to 120 fps. Some gamers have posted success stories on Reddit, reporting reasonable latency.
Critics may argue that using Boosteroid on an Xbox isn’t that appealing. Gamers still need to buy a PlayStation exclusive on PC and pay for a monthly subscription, so they aren’t saving much in costs. Also, the selection of compatible titles doesn’t include console-only games like Gran Turismo 7. Nevertheless, for cloud gaming supporters, it bypasses the need for a powerful gaming PC.
Considering the GeForce Now restrictions, the above workaround may not work much longer. Until then, the Xbox Series X|S has unexpectedly become more versatile.
Cloud hosting provides scalability and redundancy, and can be more cost-effective
(Image credit: Pixabay)
Cloud hosting is more than just a marketing term thrown around. It’s a type of hosting that provides scalability, redundancy, and, for the right users, better cost-effectiveness. Unfortunately, the term “cloud” has been overused and misrepresented, so what it actually is and what benefits it brings are not the easiest things to unpack. I’ve written this explainer to help you understand what cloud really is and whether you need it. This can help you pick the best web hosting for you.
What is cloud hosting?
To understand cloud hosting, we first need to understand traditional hosting. In traditional web hosting you rent a server that contains resources. Resources are the things your website needs to run. The three main resources are CPU, RAM, and storage.
You either share these resources through shared hosting, share them virtually through VPS hosting, or don’t share them at all with dedicated hosting. The main characteristic here is that this is one physical server with a direct link between the website and the resources in the server.
With cloud hosting, the resources don’t belong to individual servers. Instead, the resources exist in a pool, and websites can use what they need when they need it.
You can think of shared hosting as sitting around a table with a bowl of food in the middle. You don’t have your own plate; everyone just grabs from what’s available. If there is a greedy eater at the table, you’re going to get less, and you might end up hungry.
VPS hosting is the same scenario, but you have your own plate. At the beginning of the meal, everyone takes what they want and puts it on their plate. This is much better because greedy eaters won’t take food from you and you always know how much food you have.
Dedicated is the same scenario, but you’re sitting at the table by yourself. Great if you have a big appetite, but it can be very wasteful.
Cloud is different. Instead of food being served in the middle of the table, with people taking what they want at the beginning or free-for-all style, cloud is like having your own waiter. Ask for something on the menu whenever you want it, and the waiter will bring it immediately.
The main benefit of cloud hosting is scalability. By this I mean that you can add and remove resources for your site at the click of a button or automatically. The resources in cloud hosting are not attached to one server, so you can pull resources from the entire network.
With traditional hosting, you would need to physically add or remove CPU, RAM, or storage when you needed it, which takes time, is risky, and can result in downtime.
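To make the “automatically” part concrete, here is a minimal sketch of the kind of autoscaling rule a cloud host might run on your behalf. The provider object and its resize call are hypothetical stand-ins rather than any particular host’s API; the point is simply that resources get adjusted in software instead of by opening up a server.

```python
# Minimal autoscaling sketch. `provider` and its `resize` method are hypothetical
# stand-ins for whatever control API a given cloud host exposes.

def autoscale(provider, site_id: str, cpu_usage_percent: float, current_vcpus: int) -> int:
    """Grow or shrink a site's vCPU allocation based on observed CPU load."""
    new_vcpus = current_vcpus
    if cpu_usage_percent > 80 and current_vcpus < 16:
        new_vcpus = current_vcpus + 1   # traffic spike: pull more from the pool
    elif cpu_usage_percent < 20 and current_vcpus > 1:
        new_vcpus = current_vcpus - 1   # quiet period: release resources (and cost)
    if new_vcpus != current_vcpus:
        provider.resize(site_id, vcpus=new_vcpus)
    return new_vcpus
```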
Redundancy is another word for duplication. In cloud hosting, your site and backups can be duplicated and stored in many different locations (called geo-redundancy). So if the storage fails, no problem: a copy is already ready to be used in a different location. If a disaster cuts off power to a datacenter, or a datacenter is destroyed entirely, no problem! Your site was never offline because there was another copy in another datacenter.
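As a toy illustration of the geo-redundancy idea (a sketch only, with local folders standing in for datacenters in different regions), the principle is simply to write every backup to more than one independent location so that losing any single one loses nothing:

```python
# Geo-redundancy in miniature: copy the same backup into several independent
# locations (local folders here stand in for datacenters in different regions).

from pathlib import Path
import shutil

REGIONS = [Path("backups/eu-west"), Path("backups/us-east"), Path("backups/ap-south")]

def replicate_backup(backup_file: str) -> None:
    """Copy one backup file into every region so a single failure loses nothing."""
    source = Path(backup_file)
    for region in REGIONS:
        region.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, region / source.name)

if Path("site-backup.tar.gz").exists():   # example archive name; only runs if present
    replicate_backup("site-backup.tar.gz")
```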
Cost-effectiveness
It seems counter-intuitive that a type of hosting with additional features can be more cost-effective, but it can. At scale, it’s cheaper to use cloud resources for redundancy and scalability because you don’t need to own and maintain additional equipment. The cost-effectiveness is only there at scale, though. Unless you will benefit from scalability and redundancy, there is no need to pay extra for cloud.
Who needs cloud hosting?
If your website doesn’t need many resources and doesn’t see big fluctuations in traffic, like a blog or portfolio, shared hosting is absolutely fine. It’s the cheapest type of hosting, it suits people who aren’t very technical, and a low-demand site won’t suffer from the negative aspects of sharing.
If you need to guarantee resources for your site and you know the resource requirements are the same every month, then using a VPS is a better choice than shared hosting. So, the best VPS hosting is a good option for established sites that get regular visitors and don’t need the benefits of redundancy.
Dedicated is for when you have additional security requirements and/or a website or websites that demand a lot of resources. Dedicated hosting is not that popular anymore and is generally being replaced with cloud hosting, which provides all the benefits of dedicated hosting at a lower cost, plus some additional bonuses. Unless you’re dealing with highly sensitive information or your needs are very demanding, like a web developer managing multiple sites and applications, there isn’t much need to think about dedicated hosting.
You can benefit from cloud hosting in the following scenarios:
New businesses
When starting a new business, the last thing you want to be dealing with is hosting issues. You also won’t know for sure how many resources you need until you start getting site visits, and what happens if you get more than expected? Opting for cloud hosting at the beginning can save you hassle and money in the long run. Once you have established your site’s regular traffic and the resources it requires, you can move to a VPS server to save money.
Online stores
Servers for online stores do a lot more heavy lifting than servers for blogs and portfolios due to larger databases, unique users, shopping carts, and payment processing. When a site is the main revenue source, it’s also a lot more important to make sure it’s online all the time. Online stores also see bigger fluctuations in traffic from things like promotions or festivals.
Cloud hosting helps alleviate these pressures on servers to ensure that an online store is always ready to go and always has the best user experience.
There are plenty of studies showing that slow sites lose revenue, so spending more on cloud hosting can pay for itself in additional sales and higher search rankings.
Types of cloud hosting
You might see cloud hosting packaged in many different forms. Some is just fancy marketing. Some is genuine. To help you navigate the market I’ve broken down some of the key cloud packages.
Shared cloud
It seems odd to say shared cloud because it’s technically not shared in the traditional sense, and in another sense all cloud is shared. So, what is shared cloud?
Shared cloud is essentially a virtual server with many websites sharing that virtual server. It’s the same as the example I gave earlier of people eating at a table from a central bowl in the middle. An amount of cloud resources is allocated to that space and the websites have to share what’s there.
Many hosting companies are now based on cloud infrastructure and sell shared hosting this way. Some still call it shared, some call it shared cloud.
If shared servers are managed properly then, from a user’s perspective, it’s essentially the same. If a hosting company is charging more because its shared hosting is cloud-based, check what cloud benefits the plan has. Are there off-site backups? If not, then it’s not worth the extra price tag.
Cloud VPS
Cloud VPS is the same as above. A virtual environment is created and resources are reserved for it. This is the most common form of cloud hosting and usually comes with all the benefits of cloud like geo-redundancy and scalability. This is the most suitable type of hosting for online stores and growing businesses, or sites that require a custom server environment.
Cloud hosting round up
To summarise, cloud hosting is a type of infrastructure on which hosting plans can be created and managed. Cloud hosting adopts traditional hosting methodology but brings with it scalability and redundancy that traditional hosting cannot provide.
Cloud hosting is excellent for websites that have large fluctuations in site visits, new businesses that don’t know what their baseline requirements are, and for online stores that depend on excellent user experience and being always online.
Recently, Intel researchers published a blog post that points to a more modular and sustainable future for laptops and mini PCs.
The blog envisions concepts for laptops that feature standardized motherboards and I/O ports for a variety of laptop types, including fanless, single-fan, and dual-fan versions.
Part of the argument from the researchers — Roberta Zouain, Intel’s Sustainability Product Strategy and Marketing Manager; Reshma PP, Director of System Design; and Gurpreet Sandhu, Vice President of the Platform Engineering Group — is to reduce e-waste, which they say amounts to over 60 million tons on a yearly basis, with less than 25% of that actually collected for recycling.
The drive for sustainable PC design appears to go hand in hand with the right-to-repair movement, which argues that people should be able to repair or upgrade their devices without being penalized for not buying the latest gear or restricted from accessing the components and tools to do so.
There are some modular laptop designs already in the wild. Framework has been making customizable gaming PCs for a while. Asus announced the ROG Strix Scar 16 and 18 at CES, which feature toolless upgrades for the SSD, no screwdriver required.
What Intel is arguing for is something more open source and standardized, from the factory floor to repair shops and the home user. This would mean that anyone could find components and tools to repair or upgrade their laptop.
One big change proposed by Intel is a standardized motherboard split into three segments, with the core board and system-on-a-chip separated from the I/O ports.
“The creation of universal I/O boards (left and right I/O boards) that can be utilized across various platforms or market segments leads to cost savings by streamlining the duration of the design cycle and minimizing the engineering investment required,” the authors write.
(Image credit: Intel)
The paper goes on, “This innovative structure allows for targeted upgrades, repairs, and replacements, significantly extending the device’s lifespan and reducing electronic waste.”
The company also proposes a similar redesign for mini PCs, again separating various components into modules. These changes include removable CPU and GPU chips, hot-swappable storage, and even repairable Thunderbolt modules.
“These modules significantly reduce repair costs and simplify the repair process in the event of port or connector damage at the end-user level,” they write.
One obstacle that would need to be addressed here is how components are installed. In a world that favors right to repair, any component, from the display to a random USB port, could be replaced with a new part that plugs or screws in.
Unfortunately, in some devices (looking at you, Apple), components are glued in and nearly impossible to remove without some serious know-how or technical support.
In a race to have the thinnest laptops in the world (outside of the gaming market), I wonder if modular PCs would make sleek laptops a thing of the past. Personally, I would take a chunkier device over a thin laptop if I could easily swap out my RAM or put in the latest CPU to get the most out of my PC without buying a new hunk of plastic.
So when will we actually see this? Unfortunately, probably not for a while. Unless Intel is heavily working on this in the background, the blog is just a proposal, and Intel doesn’t make laptops or mini PCs.
Instead, the idea would need to be adopted through agreements with Intel partners like Dell or Asus.
Assuming Intel is working on this, we would guess the earliest you would actually see modular PCs is a year from now, but that may be wishful thinking.
A motherboard is one of the most important components to purchase when building a new PC as it’s the foundation for all of your machine’s parts to interact, but it’s far from a one-size-fits-all solution in 2025. There are four motherboard sizes available, each with different strengths and weaknesses, with smaller and larger fiberglass rectangles used for different purposes.
As such, there’s no easy answer for what the best motherboard can be, so it’s vitally important to know the four commercially available sizes, rough pricing, and the sockets available for some of the best processors on the market. After all, compatibility is vital in 2025, particularly if you’re eyeing up some of the best DDR5 RAM, best graphics cards, and other PCIe 5.0 components for the build.
From Mini-ITX models up to their E-ATX counterparts, this guide goes into detail about motherboard sizes in 2025, which companies are supporting them, the current-generation sockets, and everything else you need to know so that you can build your new machine with confidence.
There are four motherboard sizes available in 2025 from major manufacturers such as Asus, Gigabyte, and MSI, among others. These are Mini-ITX, MicroATX, ATX, and E-ATX. As the naming conventions imply, the two smallest models are the Mini-ITX and MicroATX options, which measure 6.7 x 6.7 inches and 9.6 x 9.6 inches, respectively.
The most common motherboard size is ATX, which measures 12 x 9.6 inches. For those who need a little extra headroom for additional components, E-ATX (Extended ATX) offers the largest amount of space with 12 x 13 inches of fiberglass available. Comparing the dimensions, there’s quite a dramatic difference in scale between Mini-ITX and MicroATX, with more of a subtle difference between ATX and E-ATX, generally with the latter having more room on the right-hand side.
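To put those dimensions in perspective, here is a quick back-of-the-envelope comparison of board area using the measurements quoted above (approximate figures; real usable space depends on each board’s layout):

```python
# Rough comparison of motherboard areas based on the dimensions quoted above.
boards = {
    "Mini-ITX": (6.7, 6.7),
    "MicroATX": (9.6, 9.6),
    "ATX":      (12.0, 9.6),
    "E-ATX":    (12.0, 13.0),
}

atx_area = 12.0 * 9.6  # ATX as the reference point

for name, (width, depth) in boards.items():
    area = width * depth
    print(f"{name:9s}: {area:6.1f} sq in ({area / atx_area:.0%} of ATX)")
```

By area, E-ATX works out to roughly 35% more board space than ATX, while MicroATX offers about double the area of Mini-ITX.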
In terms of use cases, both Mini-ITX and MicroATX motherboards are favored for small form factor (SFF) work and gaming computers, such as those you may use in a low-profile office setting or a console-sized rig for living room use. As the smaller scales suggest, you can expect fewer PCIe lanes for connecting components (and a more cramped building experience) depending on the PC case you’re using. This also means limited room for some of thebest CPU coolers(and more limited airflow in general), so this is something paramount to consider before you invest.
As a frame of reference, a Mini-ITX board covers roughly 40% of the area of an ATX motherboard. Historically, the former was designed for lower power consumption and efficiency when compared to the more common sizes. However, in 2025, some manufacturers have started catering to gamers and power users in this smaller size, though you (typically) will pay a premium in comparison.
Starting out with Mini-ITX motherboards, these models tend to be the more expensive way of building a small form factor (SFF) machine when compared to MicroATX, which is (generally) considered to be the more wallet-friendly option. This is reflected in the prices you can expect to pay for today’s popular models from major manufacturers.
Socket AM5 options sell at more of a premium in the Mini-ITX form factor, as can be evidenced by the ASRock A620I Lightning Wi-Fi ($139.99), Gigabyte B850I Aorus Pro ($279.99), and MSI MPG B650I Edge WiFi ($299.99), which give a rough range of the budget and more premium offerings. In contrast, MicroATX equivalents are (generally) more affordable, sitting well under the $200 mark, including the ASRock B650M Pro RS ($139.99), MSI Pro B650M-A Wi-Fi ($159.99), and Gigabyte B850M Gaming X Wi-Fi 6E ($179.99).
Price and size aside, another major difference between Mini-ITX and MicroATX is the number of ports and connections available on the motherboard. Due to the cramped 6.7 x 6.7 inches available, Mini-ITX mobos usually only have a single PCIe x16 slot for the graphics card and up to two M.2 SSD slots. Depending on the manufacturer, there may only be two RAM slots instead of four, and the rear I/O could be more cut down by comparison, resulting in fewer USB ports and other connections.
MicroATX, in contrast, usually features four RAM slots, two PCIe x16 slots, up to four M.2 slots, and vastly more expansive options for its rear I/O, because each side of the board is roughly 43% longer, giving you about twice the area to work with. In theory, MicroATX seems to be the superior option (being cheaper and offering more); however, it’s also larger in a way that makes certain small form factor (SFF) builds harder to achieve, being less slick and compact as a result. You should make your choice depending on your use case; do you need more than dual-channel RAM, two M.2 slots, a graphics card slot, and a basic rear I/O? If so, maybe the board needs to be larger.
When compared to the two smaller motherboard sizes, ATX and E-ATX variants do not seem as drastically different on the surface. However, the extra room afforded by roughly 35% more board area can be staggering, depending on the hardware you’re planning on using. While ATX motherboards traditionally feature up to four PCIe x16 slots and four RAM slots, E-ATX versions can boost things up to as many as eight PCIe x16 slots, with the potential for as many as eight RAM slots (though this is unlikely in 2025 compared to historical examples).
The major drawback of E-ATX motherboards is their higher price tag when compared to ATX offerings, as well as more limited availability. While still supported for today’s current AM5 and LGA 1851 sockets, you’re going to pay a heavy premium for the extra component space on the motherboard itself. Some popular E-ATX options can eclipse their ATX counterparts in price, as can be seen with the ASRock X670E Taichi ($449.99) and the MSI MEG X670E ACE ($499.99). In contrast, similar ATX models are much cheaper, like the ASRock X670E Steel Legend ($259.99) and the MSI MAG X670E Tomahawk Wi-Fi ($239.99).
With that said, is the extra real estate worth potentially paying double (or more) when compared to an ATX motherboard? It will ultimately depend on the use case. The power user will get the most out of the larger board space, which can be particularly important if you’re thinking of forging a high-end creation or gaming PC featuring a custom loop in a far larger E-ATX compatible PC case, complete with bleeding-edge components. It all comes back to airflow and the space required; E-ATX will afford you as much room as possible, provided you can stomach the sticker price.
It’s commonly argued that gamers will not see the benefit of the extra data lanes afforded by an E-ATX motherboard. Instead, those planning a server rig, a deep-learning machine, or something more specialized might find the extra slots and connections of vital importance. Do you need more expansion slots? Then E-ATX may be the solution here; otherwise, ATX will satisfy the vast majority of PC users for just about any task imaginable while also being far more affordable and available.
(Image credit: Gigabyte)
Which motherboard should you buy?
We’ve outlined the four different motherboard sizes available in 2025, their use cases, price differences, and varying features as they stand right now. Choosing a motherboard is not as cut and dried as you would expect, and that’s why you need to visualize your rig before putting any money down. Consider the chipset of the board for starters. Will you use AMD’s AM4 or AM5 platform? Similarly, will you pay the premium of investing in Intel’s latest LGA 1851 socket instead of sticking with the older (and arguably better) LGA 1700 platform that hosted Alder Lake and Raptor Lake?
All four motherboard sizes support the latest and greatest of today’s processor technology, just in different ways. Mini-ITX is pricey as you’re paying extra for the sleek form factor, whereas MicroATX provides a similarly small (but far cheaper) experience that usually does not boast the same features by comparison. ATX is the most widely used and commonly stocked motherboard, but power users may need the added versatility of an E-ATX board if they’re building a server or a dedicated workstation, even if gamers may not feel the added benefit.
Compatibility is the most important factor above all. As such, we recommend dedicated tools such as PCPartPicker when virtually pricing and sizing up a machine; you’ll get to see which motherboards support your chosen CPU, GPU, RAM, M.2 SSD, and other components efficiently, as well as get suggestions for compatible cases. Building a PC the size of a PS5 or Xbox Series X may be an exciting idea, but you may need a Mini-ITX motherboard and SFF components, which can boost the price while making things cramped to build in. The motherboard is the foundation of your whole machine, after all.
Microsoft got the science wrong, according to one physics and astronomy professor.
(Image credit: Microsoft)
Microsoft recently announced the Majorana 1 quantum chip, which it claims uses a Topological Core architecture capable of packing a million qubits into a single quantum processor. However, some scientists are skeptical of the results delivered by Redmond. University of Pittsburgh Professor of Physics and Astronomy Sergey Frolov told The Register, “This is a piece of alleged technology that is based on basic physics that has not been established. So, this is a pretty big problem.”
Many scientists have their reservations about Microsoft’s breakthrough, with Frolov even going as far as saying that the Majorana 1 chip is “essentially a fraudulent project.” He said that these concerns have been going on for years, especially as the company previously retracted a 2018 paper it published about Majorana particles. Other scientists also expressed concerns because Microsoft’s submission was missing some crucial details.
Aside from that, Professor Frolov said that he talked with fellow physicists and researchers with whom Microsoft has shared its data, telling The Register, “People were not impressed and there was a lot of criticism.” The company is set to present its paper and more recent developments at the American Physical Society (APS) Global Physics Summit from March 15 to 20, but he’s still skeptical that this will clear the air on Microsoft’s claims.
Frolov said that Microsoft’s planned presentation next week won’t answer all the questions and concerns raised by experts, based on what his contacts told him. He also added that the company’s Majorana results are questionable, and without them, the topological qubit it claims to have built will not work.
But whatever the case, the company’s presentation in the coming days will certainly reveal more information. And, in the end, if this quantum processor does not work, it will not have any commercial value, putting Microsoft at a disadvantage as it’s essentially throwing away money and resources it could have used to pursue a different path to achieve quantum computing.
In its defense, Microsoft’s researcher Chetan Nayak pointed out that the company submitted the paper to Nature in March 2024 and that it was published eleven months later in February 2025. He said that the company has made significant progress since then, which will be presented at the American Physical Society summit.
Microsoft said that its Majorana 1 quantum chip can “observe and control Majorana particles to produce more reliable and scalable qubits, which are building blocks for quantum computers.” This particle was first theorized in 1937, but there’s still no definitive proof that it even exists. That’s why some scientists find it hard to believe that the tech giant has detected these particles and put them to use in its quantum processor with its eight topological qubits.
Quantum computing is set to deliver processing power that is light-years away from what classical computing can achieve. However, this technology is also incredibly complex, and over 40 years of research still hasn’t resulted in a commercially viable quantum chip. Aside from Microsoft, several companies like IBM, Quantum Brilliance, QCI, and more are working on the problem, with each one taking a different approach.
Given humanity’s ever-increasing appetite for processing data, quantum computing is expected to have a market value of around $20 to $30 billion by 2030. This is likely to increase, especially as AI demands more and more processing power, with many companies investing billions into larger and more powerful data centers. So, the first company to come up with a commercially viable solution to the quantum computing problem will likely reap billions, if not trillions, of dollars in returns.
Asus may have to offload some of the costs to its clients.
(Image credit: Shutterstock)
Although Asus has been proactively preparing for potential geopolitical tariff changes, it warned that it may have to increase its prices later this year as it sets up new production facilities outside China.
“We will try to limit these costs to within a reasonable level. However, as we make further adjustments to production lines, it may become possible that we need to offset some of these costs to our clients,” an Asus co-CEO said at the company’s earnings conference call with investors and financial analysts.
The Asus executive continued, “And right now, we are seeing that some brands are already starting to make adjustments to their retail prices to cover their costs. But for Asus, we will do our best to limit the impact of these changes for our customers, we will try to maintain our offer as the most competitive in terms of both service quality and pricing.”
Production Shift Away From China To Avoid U.S. Tariffs
To avoid tariffs expected to be imposed by the new U.S. government, makers of PCs and computer components are shifting production away from China. Large PC makers, such as Dell and HP, have been moving their production to other countries for years now and have resilient supply chains that can at least lower the impact of tariffs. Other PC makers are shifting their capacity now. However, the transition takes time, so some tariffs must be paid. Furthermore, setting up new production capacity costs money, impacting companies’ profit margins or prices for the end user.
Asus primarily plans to minimize customer impact by proactively managing tariff effects. Specifically, Asus intends to absorb impacts internally by adjusting production and inventory management strategies (e.g., shifting manufacturing locations globally).
Maintaining pricing competitiveness is a priority, indicating that potential price increases would be carefully considered and kept minimal. However, Asus acknowledged that if tariff costs become significantly high or persist, they may partially pass some costs to customers, but only to the extent necessary.
In short, Asus will prioritize maintaining competitive pricing, potentially accepting some margin pressure in the short term. Still, it does not rule out limited price adjustments if tariff impacts become significant.