Would you recommend the Pentium Prescott for audio processing?
-
- Posts: 273
- Joined: Fri Aug 31, 2001 4:00 pm
Hey guys, I have a P4 at 2.4GHz, but I want to upgrade my system. The P4 2.8GHz Prescott with 16KB/1024KB cache is pretty cheap here in Germany.
Can you recommend it, or is it better to go with a Northwood?
The main reason I want to upgrade my system is that I have the Moog Modular, which is so CPU intensive.
Is there anyone out there with a Prescott?
Thx
I'm getting a Prescott-based system at work in the next few days so I can tell you then.
However, I think you're mistaken in thinking that this kind of upgrade will get you anything more than a few more voices of the Arturia plugin... I think it would be far more beneficial (and better sounding) to get some more Creamware DSP, and hook up Minimax to Modular3 and Flexor.
The Prescott (German: 'Presskopf') is something I would never recommend to anyone - it's slower than a Northwood at the same clock speed, and SSE3 is nothing to write home about (well, not right now - we'll see what happens in the future). The P4 architecture is stupid, and Prescott is about the most stupidly designed piece of silicon available today (except for Intel's Itanic). That's why Intel wants to base its future CPUs on the Pentium M design (Centrino; the core is Pentium 3 based and was designed by Intel's team in Israel - lower clock speed, better performance).
I don't like Intel in general, but I think that's obvious - AMD64 rulez, ownez, whatever...
(don't get angry, I'm probably drunk...)


-
- Posts: 1963
- Joined: Tue Aug 19, 2003 4:00 pm
- Location: Bath, England
wsippel wrote:
...SSE3 is nothing to write home about (well, not right now - we'll see what happens in the future).
I think that code has to be compiled specifically to take advantage of those instructions anyway (& SSE2 too, I believe).

wsippel wrote:
The P4 architecture is stupid,
A "triumph" of marketeering over engineering!

wsippel wrote:
...(except for Intel Itanic).
Do you mean 'Titanic'?...

Royston
Hi Counterparts.
Of course, code needs to be compiled for SSE3. The problem is: SiSoft's Sandra is already compiled for SSE3, and there is no increase in performance compared to SSE2. Maybe that will change - AMD is integrating SSE3 soon, too...
The "triumph" hasn't been that great recently - it's hard to tell your customers for years that GHz is what counts, and then try to tell 'em that a 1.8GHz Pentium M is faster than any top-of-the-line Pentium 4C HT at 3.xx GHz...
And the Itanium is called "Itanic" by a lot of tech-savvy people (most developers of the Linux crowd, Sun etc.) - and yes, "Itanic" is a blend of "Itanium" and "Titanic"...
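For anyone wondering what "compiled for SSE3" means in practice, here's a minimal sketch (nothing to do with Sandra's actual code - just a generic example using GCC and the standard intrinsics header). The horizontal add is one of the few things SSE3 adds over SSE2, and it only shows up in a binary if the source explicitly uses it and is built with the matching flag:

[code]
/* Hypothetical illustration only - not taken from Sandra or any real benchmark.
   Build with something like:  gcc -O2 -msse3 hadd_demo.c -o hadd_demo
   Without an SSE3 code path and the matching compiler flag, the new
   instructions are simply never emitted, which is why existing binaries
   gain nothing from a Prescott's SSE3 support. */
#include <stdio.h>
#include <pmmintrin.h>   /* SSE3 intrinsics (HADDPS etc.) */

int main(void)
{
    float in[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

    __m128 a = _mm_loadu_ps(&in[0]);   /* {1,2,3,4} */
    __m128 b = _mm_loadu_ps(&in[4]);   /* {5,6,7,8} */

    /* SSE3 horizontal add: {1+2, 3+4, 5+6, 7+8}.
       With plain SSE/SSE2 the same result needs several shuffle+add steps. */
    __m128 sums = _mm_hadd_ps(a, b);

    float out[4];
    _mm_storeu_ps(out, sums);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
[/code]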

Perhaps the test routines SiSandra uses simply don't exercise the additional SSE3 extensions by nature? I'm sure there is documentation on this somewhere online (probably forums, at the moment).
As for Prescott: go for the Northwood and skip the Prescott.
For those who care for my usual lengthy explanation, let's reflect back on the overall situation.
All AMD & Intel fanboys step down for a second, this is a game of leapfrog not underdog.
AMD's recent history shows a lot of success in taking an existing architecture (x86) and building upon it; for instance, their first release of the Athlon was quite powerful compared to the P3. Coincidentally enough (or not), this happened about the time that DEC sold off their 'Alpha' CPU technology, and most of the engineering staff who built that 64bit CPU (which was VERY advanced and powerful for its time) subsequently left for other companies. AMD, as you might guess, hired a few of these excellent & experienced minds. They even pioneered the SSE-style accelerations by implementing 3DNow!, which Intel basically copied & extended to create SSE/1.
Not only did the original Athlon fare well against the P3, but Intel's 'next generation' architecture in the first few P4s to market lost to both the Athlon *and* their own 1GHz+ P3s. Intel was attempting for the first time to bring some of their research from the 'high end' Itanium (or Itanic, as some non-fans prefer to call it) to the desktop, and early on they let it be known that while the technology initially seemed tepid, it should allow them to scale nicely as the P4 clocked up. They *also* let it be known that their first P4 socket, 'socket 423', would soon be replaced by 'socket 478', which should run for a long time after. So many people ran out and bought the P4 and were disappointed with what they received compared to their peers who bought AMD Athlons.
Once the P4 passed the 2GHz mark it started pulling ahead of Athlons in many tasks (except x87 FPU, i.e. older plugins). This was especially the case due to memory throughput. By the time the P4 was getting over the 2.4GHz 'hump' (which required a die shrink), AMD countered with their 'Athlon XP' line, which brought a lot of improvements (full SSE/1 support and a smaller, cooler die), and again AMD drew cheers from the 'little man'.
Now we are nearing 'modern times' (relatively speaking), and not only has Intel had to rev the core and give their FSB (front side bus) a boost to 800MHz to keep their latest P4 core (the Northwood) up with AMD's Athlon XP, but AMD has managed a sidestep and brought 64 bits to the desktop with an even newer core found in the Opteron/FX/AMD64 line. They also integrated a memory controller and added support for SSE/2 (which Intel had added to the original P4). Most of the performance boost they have gotten in modern applications actually stems from the integrated memory controller's effect on memory access, the integration of SSE/2 and other 'enhancements' (larger caches etc). The 64bit 'extensions' affect ONLY how much total RAM can be addressed by the chip (i.e. accessed), and should not be confused with the precision of calculations taking place in the CPU (32, 64 or 80 bit - the x87 FPU is actually 80bit).
Intel has been a bit slower in bringing the Prescott to market than they would have liked, because in order to get the next die shrink they have had to introduce a few new things. First they had to pioneer a 'strained silicon' process to reduce the problem of electrons leaking into nearby pathways, so that they can keep the operating voltage within reasonable engineering limits. They are also introducing yet another new socket (and there are probably other 'features' present in the new cores which will not be enabled until they can be 'fixed' and switched on in later generations to improve performance).
As for where we are NOW: Prescott is 'almost' here, and AMD has had since last summer to improve their Opteron/FX/AMD64 core. AMD performs quite admirably when compared with Intel across the entire x86 board (Opteron vs. Xeon, FX vs. P4EE, AMD64 vs. P4), and Intel's next generation seems to be a losing deal compared to not only AMD but also their own current P4s!
Sense that we've come full circle?
The 'Prescott' core should once again 'scale quite nicely' over the next year or year and a half (not to mention that it has 1MB of L2 cache), and once it hits the 'sweet spot' of its performance curve I'm sure Intel is confident it will perform up to par with the AMD stuff. It will be interesting to see what happens too, because Intel is competing with AMD's dual advantage of not just 64bit extensions (probably already in Prescott but disabled) but also AMD's integrated memory controller, which Intel has no current plans for. However, where Intel REALLY has their work cut out for them is with the upcoming socket change. "Socket T" uses a ****** type of connection, which means that rather than the CPU having pins that extend down from it, it simply has very, VERY small bumps rather like ball bearings, and the socket has very tiny spring-loaded pins which extend up to contact the CPU. This allows Intel to advance the overall 'process' used to create their CPUs and reduce the amount of copper between the CPU itself and the 'packaging' the small core sits upon (the green plate in modern AMD & Intel CPUs; Athlons, Durons and P3s were brown).
Understand that the spring-loaded pins which extend up from the socket are a potential Achilles' heel, as it may be that you can only change CPUs a few times before affecting the springs & contacts to the point where the motherboard becomes useless or buggy. This doesn't affect people who buy a system and upgrade it once at the most, but it is possible that people who love to 'tweak' their hardware will revolt to AMD's side en masse if this turns out to be the case. Or it may just be that buying a new motherboard becomes part of that 'upgrading' and tweaking. Or it may be that the engineering problems aren't as bad as they seem. No one knows yet, because Intel certainly isn't telling.
To summarize the last two paragraphs real quick: basically Intel is going to introduce the Prescott at a performance disadvantage on socket 478 and move it to Socket T, a platform change with a potential Achilles' heel. It may not be one, however, and Socket T motherboards will have advantages: initially many boards will support PCI Express, and eventually both DDR-2 and a 1066MHz bus will emerge, as well as universal support of PCI Express and a total abandonment of the ISA bus (which still exists to support the serial/parallel & PS/2 connectors). If Socket T proves to be up to the task, then by the time the Prescott is on a 1066MHz bus at 4.8GHz, the raw clock speed should overcome the deep-pipeline and low-IPC issues.
Now, as for AMD: at the time they announced their 'Hammer' core (used in Opteron/FX/64) they also announced a 4th variation: dual-core CPUs. So about the time Intel's Prescott peaks, AMD may have a tasty treat of their own and bring true multiprocessor multithreading to the masses. Of course Intel will have Nehalem coming soon after, and they have hinted that what we now know as HT (Hyper-Threading) may evolve along similar lines as well.
In the long run the IA/32 platform, even with the extended lease on life from AMD's 64bit extensions, may follow Intel's plan, and IA/64 (Itanium) or something like it may trickle down to the masses. IA/32 has other shortcomings that the 64bit extensions don't address (such as IRQs!), and unlike AMD, Intel really has the R&D financing to play a 10-year game with that one. We'll see....
Incidentally I may be off a few degrees here & there...feel free to correct me

@valis
Very good round-up, but there's one fatal flaw: AMD64 in fact speeds up calculations due to the ability to work with 64bit integer arithmetic in a single cycle - 32bit CPUs need at least 3 cycles to calculate 64bit integers. It also has twice the registers, and all registers are 64bit.
The problem is, you'll need a 64bit OS and 64bit-capable software (software needs to be heavily modified to take full advantage; recompiling alone won't do the trick). Most applications currently use float values for complex calculations because that's faster than integer calculation, but the results are not very precise due to rounding errors.
I've run extensive tests with 64bit integer math on Linux/AMD64, and you get either more precise results, or better performance, or both, using 64bit integer math.
POVRay, for example, renders at about the same speed, but the images are of better quality due to the higher precision; ffmpeg (video encoder/decoder) is _very_ fast using the experimental AMD64 assembly available now; and OpenDE (physics engine) is faster and more precise using 64bit integers compared to standard SSE2.
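Just to make the rounding-error point concrete, here's a tiny sketch (generic, not taken from POVRay, ffmpeg or OpenDE): a 32bit float only carries about 7 decimal digits, so accumulating lots of small values drifts, while a 64bit integer (fixed-point) accumulator stays exact over its whole range - and on AMD64 each of those 64bit adds is a single instruction, where a 32bit build has to use a pair of 32bit adds.

[code]
/* Generic illustration of float rounding vs. 64bit integer math;
   hypothetical example, not code from any of the projects mentioned above. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    float    f_sum = 0.0f;   /* 32bit float accumulator                    */
    uint64_t i_sum = 0;      /* 64bit integer accumulator, counting tenths */

    /* Add "0.1" one million times. 0.1 is not exactly representable as a
       float, and the accumulated rounding error becomes clearly visible.
       The integer version counts whole tenths and stays exact.
       On AMD64 the 64bit add is one instruction; a 32bit x86 build has to
       split it across two 32bit registers (add/adc). */
    for (int i = 0; i < 1000000; i++) {
        f_sum += 0.1f;
        i_sum += 1;
    }

    printf("float accumulator : %f (should be 100000)\n", f_sum);
    printf("64bit accumulator : %llu tenths = %llu.%llu (exact)\n",
           (unsigned long long)i_sum,
           (unsigned long long)(i_sum / 10),
           (unsigned long long)(i_sum % 10));
    return 0;
}
[/code]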
You're correct about a 32bit CPU calculating 64bit integer operations when they are pure GPR operations. Since a 32-bit chip relies on software support to split the data into 32bit halves, process them separately, and recombine them into the result, it's slower than an AMD "Hammer" core, which can process it 'pure' in 1 cycle. Another way to say this is that since both integer and address calculations are done through the same ALUs (64 bits wide), you get more digital 'dynamic headroom'. I.e., a 32-bit GPR (general purpose register) handles integers up to about 4.3e9, while a 64-bit GPR handles up to about 1.8e19 when doing calculations.
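A quick sketch of that 'headroom' (the numbers are just for illustration, they aren't from any audio code): a product that doesn't fit in 32 bits wraps around in a 32bit register, while a 64bit register holds it exactly - and plain 64bit arithmetic that AMD64 does in one instruction has to be pieced together from 32bit halves on a 32bit CPU.

[code]
/* Sketch of the 32bit vs. 64bit GPR "headroom"; hypothetical values only. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    printf("max 32bit unsigned: %llu (~4.3e9)\n",
           (unsigned long long)UINT32_MAX);
    printf("max 64bit unsigned: %llu (~1.8e19)\n",
           (unsigned long long)UINT64_MAX);

    uint32_t a = 100000u, b = 100000u;

    /* 100000 * 100000 = 10,000,000,000 - too big for a 32bit register,
       so the 32bit result wraps around modulo 2^32. */
    uint32_t narrow = a * b;

    /* Held in a 64bit register, the same product fits with room to spare. */
    uint64_t wide = (uint64_t)a * b;

    /* Plain 64bit addition: a single ADD on AMD64, but an ADD/ADC pair on
       two 32bit registers when compiled for 32bit x86 - the "split, process
       separately, recombine" overhead described above. */
    uint64_t sum = wide + wide;

    printf("32bit result: %llu (wrapped)\n", (unsigned long long)narrow);
    printf("64bit result: %llu\n",           (unsigned long long)wide);
    printf("64bit sum   : %llu\n",           (unsigned long long)sum);
    return 0;
}
[/code]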
I would think programmers of 32bit apps would only use 32bit precision when dealing with 'pure' GPR integers, since there are other higher-precision units available (SSE/1 mainly). So for an app to prefer 64bit integers it would have to be coded with AMD64 in mind and run in a 64-bit 'emulated' mode on 32bit processors (you mentioned a few Linux apps that seem to be championing support). For dealing with more than 32 bits, SSE/1 and SSE/2 (vector-accelerated integer and FPU operations, respectively) both use 128bit-wide registers of their own. These ALSO definitely require specific support and don't operate on 'pure' 64bit integer data by magic, but many apps have migrated towards using these registers, especially since the recent optimizations in Intel's compiler give readier access to their use.
There are also tradeoffs to having datapaths that are twice as wide. It places twice as much burden on the memory subsystems (especially on-die CACHE) when it is used that way (hence AMD's early move to integrating the memory controller to reduce latency).
However, due to this fact AMD actually has another thing in its favor that I didn't mention: in extending x86's GPRs (which are used for both addressing and simple integer ops) to 64 bits, AMD took the opportunity to increase the number of GPRs and SIMD registers to 16 of each, compared to their former chips and to all current x86 Intel chips, which only have 8 of each. This doubling of registers offers a rather large benefit in how much data AMD CPUs can keep on hand for their pipeline to work with. In fact I would expect the OpenDE tests to show a speed benefit from this using SSE/1 & SSE/2 ops on AMD64 as well, although there may be additional overhead for SSE that I'm forgetting.
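To make the register point a little more concrete, here's a rough sketch (a made-up loop, not anything from OpenDE): the same C code, simply recompiled for x86-64, can keep all of these accumulators in registers because there are 16 GPRs instead of 8, while a 32bit build has to spill some of them to the stack on every pass.

[code]
/* Made-up example of register pressure; not taken from any real application. */
#include <stdio.h>
#include <stdint.h>

/* Eight independent 64bit accumulators. Built for x86-64 they can all live
   in registers (the classic 8 GPRs plus r8-r15); built for 32bit x86, each
   one needs two 32bit registers, so most of them end up spilled to memory. */
uint64_t sum_eight_ways(const uint64_t *data, size_t n)
{
    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    uint64_t s4 = 0, s5 = 0, s6 = 0, s7 = 0;

    for (size_t i = 0; i + 8 <= n; i += 8) {
        s0 += data[i + 0];  s1 += data[i + 1];
        s2 += data[i + 2];  s3 += data[i + 3];
        s4 += data[i + 4];  s5 += data[i + 5];
        s6 += data[i + 6];  s7 += data[i + 7];
    }
    return s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
}

int main(void)
{
    uint64_t data[64];
    for (size_t i = 0; i < 64; i++)
        data[i] = i;

    /* 0 + 1 + ... + 63 = 2016 */
    printf("sum = %llu\n", (unsigned long long)sum_eight_ways(data, 64));
    return 0;
}
[/code]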
All blather aside, I appreciate the clarification on AMD's side - I was really simplifying the issue. For example, I mentioned that AMD64 could address 64 bits of address space, but the virtual address space is actually 48-bit, or about 282 terabytes of virtual space. Also, Xeon systems are available with more than 4GB now, and that IS a 32bit chip. Intel even supposedly has a fairly simple hack to allow their 32-bit systems to address up to 512GB, but it's not implemented currently.
I definitely prefer doing music and graphics to programming these days, so again I could be completely wrong - feel free to provide feedback again.

I think that's correct...
AFAIK, every 32bit CPU can use up to 16GB RAM (on Linux, that is - don't know about Windows), using address translation. But this comes with a not-so-slight speed penalty, since the CPU has to do the translations...
OpenDE is in fact faster on AMD64 using SSE math too, but you have to trade in a lot of precision. And there are even more nice AMD64 features, like K8 NUMA (currently up to 12.8GB/s memory bandwidth), or nearly linear scaling - each additional CPU increases the processing power by 70 to 95 percent.
One of the main problems so far is that there are not that many applications with real 64bit support available. The situation is better on Linux right now than it will be on Windows shortly after XP64 becomes available, since AMD64 has been pretty common on Linux for quite some time now. Also, a lot of OSS applications are written with 64bit integer math in mind - for Itanic, SPARC, MIPS and/or PPC, and now AMD64.
On Linux/AMD64, the current 64bit speedup mostly comes from the ability to use all the registers (a simple recompile does the trick). There are also applications that become slower in 64bit, and the executables are slightly bigger than their 32bit counterparts...
But it's getting better every day.

- cannonball
- Posts: 344
- Joined: Wed Sep 26, 2001 4:00 pm
- Location: italia
hi
Thanks for this useful info. Just a question: can you give a list of recommended PC parts for our Scope cards - motherboard, RAM, CPU and all the other things for a smooth system?
At the moment I have an "old" one (P3 733, Asus P2B-F, 786MB RAM etc.), and in the next months I would like to make an upgrade, but I don't know which is now the best configuration for my 3 cards (Luna 2, Pulsar 2, Power Pulsar).
ale
hi,
have a look @ http://www.hamburg-audio.de (or http://www.audio-pc.net) / audio-pc
...the site is in German, but all the components are listed there...
best,
andre