Posted: Fri Jan 20, 2006 7:07 am
by ARCADIOS
Forget 64bit, we need 128bit systems. NOW.
I wouldn't bother upgrading my socket 478 mobo.
I can make music here; hardware music equipment is more important.
OK, let's say we have the 64bit system here.
With what motherboard? Which one will give you 32GB of RAM? Which RAM????
Right now I have an IC7-G, which goes up to 4GB. Windows XP goes that high.
Is there a motherboard that will let Vista work at the 32GB maximum that a 64bit OS gives?
By the time there is one, I think even Vista will have changed.
So it will be very disappointing to realize that you will need to change motherboards (and CPUs) many times before you get one that runs a 64bit OS fully!
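For what it's worth, the 4GB ceiling is simply the 32bit address space; a rough C++ sketch of the arithmetic (the 48-bit figure is what the first x86-64 chips actually wire up for virtual addresses, as far as I know):

#include <cstdio>

int main() {
    const unsigned long long bytes_32bit = 1ULL << 32;  // 2^32 bytes = 4 GB
    const unsigned long long bytes_48bit = 1ULL << 48;  // 2^48 bytes = 256 TB
    std::printf("32-bit address space: %llu bytes (~4 GB)\n", bytes_32bit);
    std::printf("48-bit address space: %llu bytes (~256 TB)\n", bytes_48bit);
    return 0;
}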
Posted: Fri Jan 20, 2006 12:31 pm
by dubcotics
Hi to all,
Don't the SHARCs need to be 64bit in order to take advantage of 64bit drivers? AFAIK the SHARCs are 32bit, plus we're only just starting to record @ 24bit on a daily basis, so why all this fuss about 64bit? (Correct me if I'm wrong.)
Should Creamware turn up with a new device, it will definitely have to be USB or FireWire; dunno why people still think about PCI Express etc...
I don't expect and don't want Creamware to release any PCI card, period.
Regards
Posted: Fri Jan 20, 2006 2:03 pm
by braincell
USB and FireWire suffer from jitter.
On 2006-01-20 12:31, dubcotics wrote:
Hi to all,
Don't the SHARCs need to be 64bit in order to take advantage of 64bit drivers? AFAIK the SHARCs are 32bit, plus we're only just starting to record @ 24bit on a daily basis, so why all this fuss about 64bit? (Correct me if I'm wrong.)
Should Creamware turn up with a new device, it will definitely have to be USB or FireWire; dunno why people still think about PCI Express etc...
I don't expect and don't want Creamware to release any PCI card, period.
Regards
Posted: Fri Jan 20, 2006 2:44 pm
by astroman
Well, this so-called driver isn't an isolated piece of software.
I would expect that the GUI would have to be re-compiled at least.
I didn't find much in a 64bit context on the wxWidgets site (only quickly browsing, though), so I may have overlooked something obvious.
Imho there are two important points about 64bit:
#1 has been mentioned numerous times - it's the marketing no-brainer, telling the average Joe that he gets something 'twice as big' for nearly the same price - if he buys a new system. As we all know, that's an offer one can't refuse...
#2 is the complete replacement of the floating point unit, which would make compilers much simpler and spare a ton of overhead in the CPU's current 'mode switching'.
There should be a noticeable advantage in execution time, but on the other hand a sacrifice in precision (no 80-bit float) in certain kinds of processing.
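The 80-bit point is easy to see in a couple of lines, assuming a compiler where long double still maps to the 80-bit x87 format (gcc on x86 does; others may treat long double as a plain 64-bit double, in which case both results simply match):

#include <cstdio>

int main() {
    double      d  = 0.0;
    long double ld = 0.0L;
    // 0.1 is not exactly representable in binary, so rounding error accumulates
    for (int i = 0; i < 10000000; ++i) {
        d  += 0.1;
        ld += 0.1L;
    }
    std::printf("64-bit double     : %.10f\n", d);
    std::printf("80-bit long double: %.10Lf\n", ld);
    std::printf("exact             : 1000000.0\n");
    return 0;
}

Where long double really is the 80-bit type, the plain double sum drifts visibly further from the exact value.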
Intel and M$soft are, for obvious reasons, highly interested in dragging things in this direction.
Then there's the question of how much performance the alignment of 'not so big' numbers will demand...
Anyway, all those fairy tales they currently tell about the advantages of huge memory chunks are just that - nonsense and irrelevant.
Don't forget WHO is telling those stories - the company in question doesn't even have a single product in its portfolio that they entirely developed on their own.

On what do they base their claim to competence?
Don't get me wrong on the number of bits - I'm not against a few more...
But there are better places to look, as has been mentioned - GPUs offer 128 and even 256 bits immediately, on a really fast bus that isn't messed with by a supplier of a so-called 'operating system'.
cheers, Tom
Posted: Fri Jan 20, 2006 11:57 pm
by valis
On 2006-01-20 14:44, astroman wrote:
Well, this so-called driver isn't an isolated piece of software.
I would expect that the GUI would have to be re-compiled at least.
I didn't find much in a 64bit context on the wxWidgets site (only quickly browsing, though), so I may have overlooked something obvious.
Imho there are two important points about 64bit:
#1 has been mentioned numerous times - it's the marketing no-brainer, telling the average Joe that he gets something 'twice as big' for nearly the same price - if he buys a new system. As we all know, that's an offer one can't refuse...
#2 is the complete replacement of the floating point unit, which would make compilers much simpler and spare a ton of overhead in the CPU's current 'mode switching'.
There should be a noticeable advantage in execution time, but on the other hand a sacrifice in precision (no 80-bit float) in certain kinds of processing.
I believe SSE2 provides 64-bit floating-point calculations, which have largely superseded the 80-bit x87 FPU calculations in many apps, since it's easier for the CPU to use the vector extensions. Older G4s with 'dual AltiVec' units and older Athlon cores with two x87 FPUs (before they added SSE2 and now SSE3; I'm not sure if Athlons still have two x87 FPUs) would show up on benchmarks as so much more 'powerful' than a P4 core, since the P4 only had a single SSE2 unit and a single x87 FPU. To be honest I'm not really sure what SSE3 added, and the continuing level of 'RISC'-like abstraction in modern cores compounds things a bit.
Also, it seems to me that '64bit' x86 CPUs don't gain FPU power from the '64bit-ness'; they gain in memory access and INTEGER calculations, since the widened registers are integer registers. This allows either two 32bit INT calculations to be pipelined together or one 64bit calculation to be handled in less CPU time, and of course it allows access to a significantly larger memory space.
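As a rough illustration of the SSE2 'vector' idea mentioned above (just a C++ sketch for an SSE2-capable x86 compiler; the values are arbitrary):

#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdio>

int main() {
    double a[2] = {1.5, 2.5};
    double b[2] = {0.25, 0.75};
    double out[2];

    __m128d va = _mm_loadu_pd(a);     // two 64-bit doubles in one 128-bit register
    __m128d vb = _mm_loadu_pd(b);
    __m128d vr = _mm_add_pd(va, vb);  // one instruction, two 64-bit additions
    _mm_storeu_pd(out, vr);

    std::printf("%f %f\n", out[0], out[1]);
    return 0;
}

The x87 unit, by contrast, works on one 80-bit value at a time - which is exactly the precision-versus-throughput trade-off astroman mentioned.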
[ This Message was edited by: valis on 2006-01-20 23:58 ]
Posted: Sat Jan 21, 2006 6:48 am
by darkrezin
Robi - it's all about market share and common sense.
No one but a handful of nerds is running Windows 64-bit for serious apps.
If you have paid work to do, are you really going to beta-test Windows 64 bit and all the associated new drivers/software?
Therefore, why do you think it should be a priority for CW to make Win64 drivers? Vista is only around the corner, so why pay for driver development twice? I work in software, and I can tell you it's economically stupid to do the same work twice.
Stop listening to marketing.
Posted: Sat Jan 21, 2006 2:33 pm
by valis
On 2006-01-21 06:48, darkrezin wrote:
Robi - it's all about market share and common sense.
No one but a handful of nerds is running Windows 64-bit for serious apps.
If you have paid work to do, are you really going to beta-test Windows 64 bit and all the associated new drivers/software?
Therefore, why do you think it should be a priority for CW to make Win64 drivers? Vista is only around the corner, so why pay for driver development twice? I work in software, and I can tell you it's economically stupid to do the same work twice.
Stop listening to marketing.
I still remember that when XP was released, many, many audio users were still running their (tweaked and stable) Win98 boxes and contemplating finally upgrading to Win2000.
Posted: Sat Jan 21, 2006 4:29 pm
by Nestor
Beyond the technological talk going on, which I find very interesting, there is the sound, which you will agree is much more important, because it is the aim of and the reason for switching to more sophisticated systems with higher bit-depth possibilities, isn't it?
When I started using tape recorders to record a rock group I had 20 years ago, we thought the sound was quite nice.
Then came CD quality, and I was impressed by the detail, as you could hear "everything" in a performance for the first time, and so the enjoyment became more intense for the listener.
When the first discussions about 24 versus 16 bits started in this forum, I never thought it would help much to switch to a higher bit depth. I was wrong, in a way: I did switch, and the difference was quite noticeable. Still, the jump on that occasion could not be compared with the one from tape recorders to good-quality digital recorders at 16 bits.
When sequencers started to allow us to record at 32 bit float, I thought it would not make too much of a difference, and it does not, really, essentially… it sounds better, yes, but it's not a big jump. I like the "particularity" of the sound at this higher bit depth rather than perceiving everything as sounding much clearer.
Today, with 32 bit float, I feel more than comfortable, and I have a thoroughly enjoyable experience. I don't feel the need for more.
If 64 bits brought objective and truly important benefits to the use of computers in general, I would think about it; otherwise, for some more headroom, I am certainly not going to bite. If 64 bits were to give me an important improvement in the whole computing experience, predominantly in my multimedia jobs, I could think about it.
Conclusion: to my understanding, perhaps 1% of the 3000 people gathered here on Planet Z may need it, for extremely professional work - Hollywood-like productions perhaps; otherwise it really is an unhappy waste of money and time.
Posted: Sun Jan 22, 2006 3:08 am
by astroman
On 2006-01-21 16:29, Nestor wrote:
...When the first discussions about 24 versus 16 bits started in this forum, I never thought it would help much to switch to a higher bit depth. I was wrong, in a way: I did switch, and the difference was quite noticeable. ...
Well, there is one big problem with objectivity:
The differences in sound perception with top-of-the-line converters, regardless of whether they're 16, 18, 20 or 24 bit, are quite small and to a high degree not even determined by the converter itself.
The surrounding electronics - analog stages, clock stability and filters - have a significant influence.
Hi-fi geeks like to mod their CD players (for example) by replacing $3 opamps with 'better' types costing up to 10 times as much.
We all know that a studio clock immediately improves the output result of a Scope card - and that card isn't bad at all.
So it's not the bit depth itself, but what's made of it.
I'm definitely underpowered when it comes to the details of DSP math, but I know for sure that these calculations contain a ton of looped and trigonometric functions, often combined.
Under these preconditions the 'bit precision' of calculations is rather irrelevant.
64 bit will NOT offer any improvement in this context just by using 'bigger' numbers.
In fact those numbers become SO big that it's hard to believe there should be no improvement.
Yet it's in the very nature of that type of math that only a few iterations with a very, very tiny error lead to almost random results.
Imho that's the reason why 'specialized' DSP math (as provided by Analog Devices in our case) will ALWAYS yield better results than a general-purpose PC lib.
This is far from a 'defensive position' regarding literally outdated gear - any native plugin POTENTIALLY could sound identical to its Scope counterpart (if played back via the same converter)...

... if the same quality of math implementation existed in an x86/x87 library, which obviously isn't the case.
Since this market segment is fairly small, there aren't many changes to be expected on the PC side.
Analog Devices, on the other hand, CAN AFFORD a huge development team of specialists because it's their core business.
cheers, Tom
Posted: Sun Jan 22, 2006 11:13 am
by Nestor
What an interesting read, Astro; you opened my mind to details I had not taken into account. Great!
But finally, I am glad to read in your description that we, in fact, don't really need 64 bits. At least, not for a long while.
Posted: Sun Jan 22, 2006 3:19 pm
by at0m
Aren't we confusing two different things, 64-bit processing and 64-bit audio? One is the word size the CPU and applications use internally; the other is audio dynamic range...
I'd be stupid not to agree that 64-bit CPUs and higher clock speeds do increase performance. But for my audio I don't need more, because 24bits (8x16) is enough to record with and 32bits (16x16) is enough to process it. This doesn't mean I wouldn't want my CPU to process two 32bit signals in one clock cycle.
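To put rough numbers on the 'enough' part (a quick C++ sketch; the usual rule of thumb is about 6dB of dynamic range per bit, and 32-bit float gets its extra headroom from the exponent rather than from this formula):

#include <cmath>
#include <cstdio>

int main() {
    const int depths[] = {16, 24};  // fixed-point recording depths
    for (int bits : depths) {
        // theoretical range between full scale and one LSB: 20*log10(2^bits)
        double range_db = bits * 20.0 * std::log10(2.0);
        std::printf("%d-bit: ~%.1f dB of dynamic range\n", bits, range_db);
    }
    return 0;
}

That gives roughly 96dB at 16bit and 144dB at 24bit - already beyond what real converters deliver.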
Hope I didn't add to the confusion

Posted: Mon Jan 23, 2006 1:54 am
by astroman
Yes, it's important to distinguish addressing mode and data precision.
I referred to the math precision part only, and it's more than likely that an 'increased precision' argument will show up in the ads (or already did, according to Stardust's links).
But precision-wise any attempt IS bound to fail regardless of the initial number of bits used - it just happens later.
I came across this in the chapter 'Chaos Wipes Out Every Computer' from the book 'Chaos and Fractals' (Springer).
Quadratic equations with feedback aren't uncommon in audio processing afaik, so their 'experiments' even have some real-world appeal.
The first one was stunning, yet understandable to a degree: the equation p + rp(1-p) was iterated on two scientific calculators with different numbers of digits (10 versus 12).
...we noted that the tiny little deviation we noticed in the 10th decimal for the 6th iterate has migrated through all decimal places, i.e. after 40 iterations it has been amplified by a factor of 10^10
The next 'experiment' was introduced with:
...this is still not the end of the story. Things are even wilder than we've seen so far...
They ran two versions of the equation above on ONE calculator, i.e. p + rp(1-p) versus (1+r)p - rp^2.
As you might have already guessed, the results start to deviate at the 12th iteration...
The book has (of course) a different intention than dealing with math precision, but I found it really illustrative of what's hidden behind the (sometimes not so) obvious.
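For the curious, here is that second 'experiment' redone as a quick C++ sketch in ordinary 64-bit doubles (r = 3 and the starting value 0.01 are just illustrative, roughly what the book uses):

#include <cmath>
#include <cstdio>

int main() {
    const double r = 3.0;
    double a = 0.01;   // iterated as p + r*p*(1-p)
    double b = 0.01;   // iterated as (1+r)*p - r*p*p  (algebraically the same)
    for (int i = 1; i <= 60; ++i) {
        a = a + r * a * (1.0 - a);
        b = (1.0 + r) * b - r * b * b;
        if (i % 10 == 0)
            std::printf("iteration %2d: %.15f  %.15f  diff %.1e\n",
                        i, a, b, std::fabs(a - b));
    }
    return 0;
}

On paper both lines compute the same value; in floating point the tiny rounding differences get amplified on every pass through the loop, until after a few dozen iterations the two 'identical' formulas no longer agree.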

It's a principle in this type of math, and it will strike on a Casio pocket calculator just as well as on a multi-million-dollar supercomputer.
Since it's unavoidable, one has to be careful in choosing the proper equations AND their implementation.
This is also reflected in one of my favourite quotes:
'...it's not just about picking a mathematically correct algorithm - it's about picking one that sounds good...' (C. Kemper, designer of the Access Virus)
cheers, Tom
Posted: Mon Jan 23, 2006 2:36 am
by Shroomz~>
Posted: Mon Jan 23, 2006 2:27 pm
by wayne
Oh Shroomz, posting while asleep again

Posted: Sat Jan 28, 2006 10:14 am
by thomashenrydavies
On 2006-01-20 14:03, braincell wrote:
USB and FireWire suffer from jitter.
I don't think you quite understand jitter and its implications - FireWire/USB audio cards do not suffer from jitter by virtue of being FireWire/USB. If an audio device has a good clock, then it doesn't matter whether it is FireWire, USB or PCI.
Posted: Sat Jan 28, 2006 10:20 am
by thomashenrydavies
Yet it's in the very nature of that type of math that only a few iterations with a very, very tiny error lead to almost random results.
Imho that's the reason why 'specialized' DSP math (as provided by Analog Devices in our case) will ALWAYS yield better results than a general-purpose PC lib.
This is far from a 'defensive position' regarding literally outdated gear - any native plugin POTENTIALLY could sound identical to its Scope counterpart (if played back via the same converter)...

... if the same quality of math implementation existed in an x86/x87 library, which obviously isn't the case.
This isn't true. Floating point implementation is a well-defined IEEE standard that Intel processors certainly DO adhere to. Floating-point maths on a CPU is just as good as it is on a DSP.
Posted: Sat Jan 28, 2006 11:05 am
by dehuszar
Well, the big problem with USB and FireWire is that, since SFP and subsequent plugins have been designed to use the system memory of the host computer, you need a really fast and wide path through the system bus.
Many of us who have played with using the Magma chassis to make our CWA systems portable have come to discover that even with the included high-speed SCSI cable (@133MB/sec), information being processed in real time can't be computed in time due to a system traffic jam. The SHARC DSPs, while amazing in their own right, are designed to run in real time, unlike our host processors. If a process takes too long, SFP essentially craps the bed. Goodnight, Gracie.
In order to be able to use a CWA system over such a narrow and slow bus (remember, 480Mbit/s means... 8 bits to a byte, so 480/8 = 60MB/sec burst speed, not sustained. Not so fast, is it?), you'd not only have to throw a whole mess of memory on each board, or use SHARCs with a nice amount of RAM per SHARC, but you'd also have to rewrite ALL the software which uses system memory to use the on-board memory instead. Hint: most plugins access system memory for SOMETHING.
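To put rough numbers on that 60MB/sec (a quick C++ sketch that ignores protocol overhead, so real-world channel counts would be quite a bit lower):

#include <cstdio>

int main() {
    const double bus_bytes_per_sec = 480e6 / 8.0;   // USB2: 480 Mbit/s -> 60 MB/s burst
    const double bytes_per_sample  = 3.0;           // 24-bit audio
    const double sample_rates[]    = {44100.0, 96000.0};

    for (double sr : sample_rates) {
        double per_channel = sr * bytes_per_sample;           // bytes/sec per channel
        double channels    = bus_bytes_per_sec / per_channel; // one direction, ideal
        std::printf("%.0f Hz / 24-bit: ~%.0f channels at the theoretical limit\n",
                    sr, channels);
    }
    return 0;
}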
Not only that, but if you prevent access to system memory, then you're forced to sacrifice some of your overall SHARC resources to running SFP, as they did for NOAH.
Having said that, if they DO find a way to handle all the work involved in re-designing all of that stuff, my vote is for FireWire 800, as you'd get essentially 1.5x the number of tracks you can push in each direction, and it maintains better sustained speeds than USB2.
Posted: Sat Jan 28, 2006 11:51 am
by symbiote
On 2006-01-21 07:19, stardust wrote:
I checked steinberg's website.
There is no Cubase 64 bit announced.
Yep, what was released was only the VST 2.4 SDK and specification -- which, as far as I can see, only adds a callback for 64-bit double-precision audio processing (processDoubleReplacing()) alongside the standard 32-bit float processReplacing() -- and not an actual 64bit Cubase (although I'm sure they're working on that too).
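Roughly, the two entry points look like this (a simplified stand-in, not the real SDK class, just to show the 32-bit float versus 64-bit double split):

// Simplified sketch of the two VST 2.x audio callbacks; VstInt32 and the
// trivial gain are stand-ins, not the actual SDK code.
typedef int VstInt32;

class SketchPlugin {
public:
    // standard 32-bit float path
    void processReplacing(float** inputs, float** outputs, VstInt32 sampleFrames) {
        for (VstInt32 i = 0; i < sampleFrames; ++i)
            outputs[0][i] = inputs[0][i] * 0.5f;
    }
    // 64-bit double-precision path added with VST 2.4
    void processDoubleReplacing(double** inputs, double** outputs, VstInt32 sampleFrames) {
        for (VstInt32 i = 0; i < sampleFrames; ++i)
            outputs[0][i] = inputs[0][i] * 0.5;
    }
};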
Posted: Sat Jan 28, 2006 3:57 pm
by astroman
On 2006-01-28 10:20, thomashenrydavies wrote:
...'specialized' DSP math (as provided by Analog Devices in our case) will ALWAYS yield better results than a general-purpose PC lib.
... any native plugin POTENTIALLY could sound identical to its Scope counterpart
... if the same quality of math implementation existed in an x86/x87 library, which obviously isn't the case.
This isn't true. Floating point implementation is a well-defined IEEE standard that Intel processors certainly DO adhere to. Floating-point maths on a CPU is just as good as it is on a DSP.
WHAT isn't true?
Didn't my quote above explicitly allow that a CPU-based plugin could sound as good (or bad) as one based on DSP code?
Would you expect better audio processing from a general-purpose chip supplier than from a specialist who (literally) releases millions of high-end consumer devices per year?
Quadratic equations ARE an important part of DSP math, and (as mentioned) there's no fundamental difference between a $10 pocket calculator and a $10 million supercomputer in this context - ANY system with a finite number of digits WILL fail precision-wise.
It is completely irrelevant whether a SINGLE math operation is defined correctly (according to whatever standard).
This is about iterated functions, about looped code, and even the choice of the form in which the algorithm is expressed has an influence on the result (as mentioned).
Since errors are unavoidable anyway, I simply assume that the 'specialists' will find ways to control the direction in which the result develops, while the 'standard version' will just run out of control. But of course that's speculative...

Regarding the SHARC libraries, AFAIK these are not pure math operations but also include basic building blocks for functional processing (like a filter, for example).
Not that I want to make too much publicity for 'Chaos and Fractals', but the chapter quoted above should at least be read once by anyone who's interested in this type of programming.
cheers, Tom
[ This Message was edited by: astroman on 2006-01-28 16:09 ]
Posted: Sat Jan 28, 2006 5:04 pm
by darkrezin
I can't claim to have a proper understanding of the subject, but I was under the impression that the integer-based maths used by DSP cards like Scope and Pro Tools TDM is not prone to inaccuracy in the way floating-point maths is (i.e. no rounding issues)?