Rumored NVIDIA Next-Gen GPU codenamed Hopper gets a registered trademark

I am a big fan of Dennis Hopper:), but Miss Grace is also highly appreciated. Possibly my next graph. card...
I'm a little surprised they didn't choose Hopper sooner. Also... if Nvidia is concerned about this sort of thing, shouldn't they be trademarking as many of these names as they can?
DLD:

I am a big fan of Dennis Hopper:), but Miss Grace is also highly appreciated. Possibly my next graph. card...
lol imagine that, if Nvidia went from famous physicists to Hollywood actors.
'MCM design scale less for GPUs, think a little about SLI for example and the problems that come with it.' I doubt this will be an issue at all, as the modules themselves would be transparent to the driver and appear as a single GPU as they are connected. Much like AMD's MCM CPUs appear as a single CPU. https://trademarks.justia.com/media/image.php?serial=86795958 Fairly popular Trademark. https://cdn.wccftech.com/wp-content/uploads/2017/10/AMD-Navi-GPU-Launching-in-2018-Could-Be-MCM-Based.png
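To give a rough idea of what "transparent to the driver" would mean in practice, here is a minimal sketch (purely an assumption about how such a part might present itself to software, not a known spec): the CUDA runtime would simply keep reporting one device, the same way an MCM Ryzen shows up to the OS as one CPU.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Assumption for illustration: if the modules were stitched together below
    // the driver, the runtime would still enumerate a single device.
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %d SMs\n", i, prop.name, prop.multiProcessorCount);
    }
    return 0;
}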
"Shopper" rather ?
AMD also ought to be going for the MCM design, assuming they are still planning to make bigger GPUs in the future. They even have practical experience already from the CPU side, although they naturally don't have Nvidia's budget. It'll be interesting to see how it's going to turn out from both camps. Nvidia is basically also competing against itself, its own previous-generation flagship, so they can't have a weak MCM opening, unless they go for a much cheaper price. But even so people do expect a cool flagship from Nvidia.
"MCM design scale less for GPUs, think a little about SLI for example and the problems that come with it..." I don't think one can make that statement because what GPUs utilizing an MCM have ever been released? SLI cards don't qualify because they are not in a module and that is a key to the design - the module aspect of it. If done right, all of the issues with SLI should be resolved with an MCM design. The keys will be how the functionality is partitioned across modules and how well inter-module latency can be hidden. AMD does it fairly well so hopefully Nvidia can learn from them.
So just how far off are we from seeing one of these GPUs hit the market, considering Ampere hasn't even been released yet?
schmidtbag:

I'm a little surprised they didn't choose Hopper sooner. Also... if Nvidia is concerned about this sort of thing, shouldn't they be trademarking as many of these names as they can?
Trademarking these days doesn't last unless you can prove you have commercial interest in using it within a certain time. Just recently (November iirc) a company lost its trademark conflict in court because it could not prove (and until then never did) that they wanted to use it soon. Can't find the link or news though, sorry. Maybe Hopper's what's coming after Ampere? Ampere in 2020 and Hopper in 2021 or so... or server-grade hardware like Volta (which we never saw for gamers).
HeavyHemi:

'MCM design scale less for GPUs, think a little about SLI for example and the problems that come with it.' I doubt this will be an issue at all, as the modules themselves would be transparent to the driver and appear as a single GPU as they are connected. Much like AMD's MCM CPUs appear as a single CPU.
Not quite the same... When a CPU core is processing some data, it generally only deals with its own L1, L2, L3 cache data, and doesn't need to access other cores' L1, L2, etc. data. Whereas many GPU algorithms might need access to data which is not in the local GPU cache, so they must continually retrieve the data from other GPU caches, and this is where the difficulty and the inefficiency lie. The cross-cache communication times, and how efficiently various GPU algorithms can minimize the amount of this data shuffling between caches, will be the issue. Being transparent to the driver does not speed up this low-level shuffling.
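As a concrete (entirely made-up) example of the access pattern in question: any filter that reads neighbouring pixels needs rows that, in a split-frame setup, would sit in the other GPU's or module's memory.

// Hypothetical vertical 3-tap blur in CUDA: every output pixel reads the row
// above and below. If the frame were split between two GPUs/modules at row
// height/2, the rows along the seam would have to be fetched from the other
// module's memory every single frame.
__global__ void blur_rows(const float *in, float *out, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y <= 0 || y >= height - 1) return;
    out[y * width + x] = (in[(y - 1) * width + x] +
                          in[ y      * width + x] +
                          in[(y + 1) * width + x]) / 3.0f;
}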
fantaskarsef:

Trademarking these days doesn't last unless you can prove you have commercial interest in using it within a certain time. Just recently (November iirc) a company lost its trademark conflict in court because it could not prove (and until then never did) that they wanted to use it soon. Can't find the link or news though, sorry. Maybe Hopper's what's coming after Ampere? Ampere in 2020 and Hopper in 2021 or so... or server-grade hardware like Volta (which we never saw for gamers).
In Ireland, the global fast-food giant (and very litigious about their copyrights and names) McDonald's lost a trademark case against an Irish fast-food place called "SuperMacs" which has been running for years here - the owner's name is MacDonagh, so obviously he had some right to use his own name in his fast-food restaurant - and the European court agreed with him! So the giant McDonald's lost the case. SuperMacs is even still able to sell burgers called "Mighty Macs" and stuff like that.
geogan:

In Ireland, the global fast-food giant (and very litigious about their copyrights and names) McDonald's lost a trademark case against an Irish fast-food place called "SuperMacs" which has been running for years here - the owner's name is MacDonagh, so obviously he had some right to use his own name in his fast-food restaurant - and the European court agreed with him! So the giant McDonald's lost the case. SuperMacs is even still able to sell burgers called "Mighty Macs" and stuff like that.
Going to Ireland, McDonald's really should have expected to encounter something like that.
Kaarme:

Going to Ireland, McDonald's really should have expected to encounter something like that.
Yep, but typical - they still tried to sue him and get him to change the name of his entire chain of fast-food restaurants, and not be allowed to use his own name on them 🙄
geogan:

Not quite the same... When a CPU core is processing some data, it generally only deals with its own L1, L2, L3 cache data, and doesn't need to access other cores' L1, L2, etc. data. Whereas many GPU algorithms might need access to data which is not in the local GPU cache, so they must continually retrieve the data from other GPU caches, and this is where the difficulty and the inefficiency lie. The cross-cache communication times, and how efficiently various GPU algorithms can minimize the amount of this data shuffling between caches, will be the issue. Being transparent to the driver does not speed up this low-level shuffling.
This is not true. Cache snooping is a key aspect of multi-core designs. A given core is not exactly accessing the other cores' caches, but it has to be notified when a piece of data that both share is changed. Multiple GPUs in an SLI configuration do not do this - they work as independent islands. Cores in the same GPU share their caches at a block level, where a block is a group of threads.
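A bare-bones CUDA sketch of that block-level sharing (a made-up reduction kernel, just to show the mechanism): threads within one block cooperate through on-chip shared memory and __syncthreads(), while other blocks - let alone another GPU in SLI - never see that copy.

// Each block sums 256 values through its own shared-memory tile; other blocks
// (and certainly another GPU) have no access to it. Assumes the kernel is
// launched with blockDim.x == 256.
__global__ void block_sum(const float *in, float *out) {
    __shared__ float tile[256];              // visible only to threads of this block
    int tid = threadIdx.x;
    tile[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();

    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride) tile[tid] += tile[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = tile[0]; // one partial sum per block
}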
How are they trademarking this, when Dish owns the trademark for Hopper, last I checked?
Gomez Addams:

This is not true. Cache snooping is a key aspect of multi-core designs. A given core is not exactly accessing the other cores' caches, but it has to be notified when a piece of data that both share is changed. Multiple GPUs in an SLI configuration do not do this - they work as independent islands. Cores in the same GPU share their caches at a block level, where a block is a group of threads.
I did mean caches and main memory from two different GPUs in an SLI configuration. i.e. they are both modifying their own copy of the frame buffer, but then modern algorithms mean that one GPU needs the values of pixels modified by the other GPU in the other GPU's memory - it becomes a constant copy of memory back and forth. Compared to a multi-core CPU, which only modifies one main memory.
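In CUDA terms that per-frame traffic would look something like this (sketch only - buffer names and sizes are made up): each GPU's half of the frame has to land in the other GPU's memory before either can use it.

#include <cuda_runtime.h>

int main() {
    // Two GPUs each hold half of a 1080p RGBA8 frame buffer (made-up layout).
    const size_t halfFrame = (size_t)1920 * 1080 / 2 * 4;

    void *buf0, *buf1;
    cudaSetDevice(0); cudaMalloc(&buf0, halfFrame);
    cudaSetDevice(1); cudaMalloc(&buf1, halfFrame);

    // Let each GPU reach the other's memory over PCIe/NVLink, where supported.
    cudaSetDevice(0); cudaDeviceEnablePeerAccess(1, 0);
    cudaSetDevice(1); cudaDeviceEnablePeerAccess(0, 0);

    // The shuffling in question: every frame, ship the other GPU's pixels across.
    cudaMemcpyPeer(buf0, 0, buf1, 1, halfFrame);
    cudaMemcpyPeer(buf1, 1, buf0, 0, halfFrame);

    cudaDeviceSynchronize();
    return 0;
}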
geogan:

I did mean caches and main memory from two different GPUs in an SLI configuration. i.e. they are both modifying their own copy of the frame buffer, but then modern algorithms mean that one GPU needs the values of pixels modified by the other GPU in the other GPU's memory - it becomes a constant copy of memory back and forth. Compared to a multi-core CPU, which only modifies one main memory.
I apologize for the late reply. In some anti-aliasing algorithms they might need that data; most of today's algorithms do not. For example, with super-sampling each pixel is computed multiple times (2-4 times), and the mean of the values is used for each pixel, so the values of adjacent rows and columns are unnecessary. The data multiple GPUs share are the models' vertices and the textures, and they can have their own copies. Since vertices can be moved from frame to frame, that information has to be shared since it relates to the models. All of the rendering can be done independently.
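To make the super-sampling point concrete, a toy CUDA sketch of a 4x resolve (made-up buffer layout): each output pixel averages only its own sub-samples, so nothing from neighbouring pixels - let alone the other GPU's half of the frame - is needed.

// Toy 4x SSAA resolve: the high-res buffer stores a 2x2 group of sub-samples
// per output pixel, and each output pixel is just the mean of its own four.
__global__ void resolve_ssaa4(const float *hires, float *out, int outW, int outH) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= outW || y >= outH) return;

    int srcW = outW * 2;
    float sum = 0.0f;
    for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx)
            sum += hires[(y * 2 + dy) * srcW + (x * 2 + dx)];
    out[y * outW + x] = sum / 4.0f;   // mean of this pixel's own samples only
}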
Hilbert Hagedoorn:

Remember a week or three ago when a rumor surfaced about a possible MCM prototype from NVIDIA called Hopper? Well, that name is now a registered trademark, and guess who registered it? Yep, NVIDIA. ... Rumored NVIDIA Next-Gen GPU codenamed Hopper gets a registered trademark
Hmm, can I ask, is there any source for this? Maybe a link? Thanks for the article, Hilbert 🙂