"A value of 0 and 1 will play once"
That would be consistent with the way Timer.new(delay, repeatCount) handles infinite loops:
"A value of 0 runs the timer infinitely"
I believe (need to check again) that this is also the case with GTween (nbLoops = 0 -> infinite loops).
Sound:play(startTime, loops)
Creates a new SoundChannel object to play the sound. By using the returned SoundChannel object, you can stop the sound and monitor the position.
Parameters:
startTime: (number, default = 0) The initial position in milliseconds at which playback should start.
loops: (number, default = 0) Defines the number of times a sound loops back to the beginning before the sound channel stops playback. For example, a value of 0 and 1 will play once, 2 will play twice and so on.
Returns:
A SoundChannel object, which you use to control the sound. This function returns nil if you run out of available sound channels.
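Putting the documented signature to use, a small Gideros Lua sketch (engine classes, so it only runs inside a Gideros project; the file name is made up for illustration):

```lua
-- Start playback 500 ms into the sound and repeat it twice.
local sound = Sound.new("explosion.wav")  -- hypothetical file name
local channel = sound:play(500, 2)

-- play() returns nil when no sound channels are available,
-- so guard before using the returned channel.
if channel then
    channel:addEventListener(Event.COMPLETE, function()
        print("playback finished")
    end)
end
```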
What do you think about it?
Timer.new(delay, repeatCount)
Creates a new Timer object with the specified delay and repeatCount states.
Parameters:
delay: The time interval between timer events in milliseconds.
repeatCount: (default = 0) The number of repetitions. A value of 0 runs the timer infinitely. If nonzero, the timer runs the specified number of times and then stops.
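For comparison, the Timer behaviour described above looks like this in Gideros Lua (engine classes; illustrative only):

```lua
-- A timer that fires every second, five times, then stops.
local timer = Timer.new(1000, 5)
timer:addEventListener(Event.TIMER, function()
    print("tick")
end)
timer:addEventListener(Event.TIMER_COMPLETE, function()
    print("done after 5 ticks")
end)
timer:start()

-- With repeatCount = 0 the timer runs indefinitely.
local forever = Timer.new(1000, 0)
```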
I don't know where we can check if what you suggest 1) is part of the roadmap and 2) which level of priority was given to it.
I know that you have been asking about it for quite a long time but it's only recently that I have played with the sound API so I don't see the limits yet like you do (probably soon?).
In the meantime, I believe that we can have this little fix (seems quite straightforward) that I suggest and it won't have a big influence on the sound API overhaul.
But what do I know *shrug*
As for the roadmap, it was listed as near future in September on this thread:
http://www.giderosmobile.com/forum/discussion/comment/12969
But as we now know, that roadmap isn't really worth paying any attention to, as Atilim has already teased stuff that contradicts it. So, who knows?
From my point of view, if it takes less than a certain amount of time (a minimum, so it's not a huge investment), if it can make the experience a little better until the API is improved (between now and March), and if enough people find the request would improve the experience, then why not?
As I said, I don't know anything about the amount of time it would take.
If it's just changing a few "if" conditions then maybe it's worth it and will prevent users from making errors because of the lack of consistency.
But I agree (and Atilim would probably agree, as it is part of the roadmap) : the goal is to improve the API.
It seems that Sound:play(*startTime*, loops) does not take *startTime* into account when the file is in *.wav format (playback always starts from 0).
It works fine with *.mp3 files.
My point is: let's try to create something that we think should be in that API and see where it goes; it's something I'm trying to achieve in GiderosCodingEasy.
So what do you think, is there a need for separate Sounds and SoundChannel classes?
What other functionality is needed, like fadeOut, fadeIn, etc.?
Of course not all of it can be implemented; the bug that @Mells found and panning, for example, can't be implemented from Lua either, but at least it would be a start.
If @atilim starts working on a new Sound API, it would be much better to have a list of wishes, requirements, etc. Even better, a prototype that users already use.
I couldn't even get confirmation from Atilim on whether Gideros uses hardware decoding of an mp3 file where it's available. I've seen him post that playing background music uses 20% CPU on some devices (there's a thread somewhere where he mentions this), which makes me wonder if it does or not.
As for features: Pausing, panning, separation in the handling of background music and sound files, changing to OpenSL. There's a start. I'm happy if we have to handle fading in and out (presuming that issues I encountered whilst doing it are removed, e.g. buffering old track playing for a split second when starting a new track at full volume etc.) Heck, even pausing I can get around (but it's messy).
Out of everything that's provided in Gideros, sound seems to be one of the most neglected features.
I've reimplemented the Sound API from the ground up for the upcoming release. Currently I'm testing it and I'm about to release a new version.
Here are the new additions:
1. Sound objects are reused internally, so creating the same sound file more than once doesn't load it again.
2. Added setPitch/getPitch to SoundChannel
3. Added pause/resume
4. On iOS, it's now possible to allow iPod music while playing the game (e.g. you can choose Ambient or SoloAmbient as the AVAudioSession category)
5. Supporting an arbitrary loop count made my implementation (really) complicated. I've dropped that implementation, and now instead of a loop count, you specify looping or non-looping:
Sound.new("ding.wav"):play(0, 0) -- play once
Sound.new("ding.wav"):play(0, 1) -- play once
Sound.new("ding.wav"):play(0, 2) -- looping
Sound.new("ding.wav"):play(0, 3) -- looping
Sound.new("ding.wav"):play(0, math.huge) -- looping (math.huge is a value, not a function)
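As a hedged illustration of point 1, internal reuse presumably means that creating the same file twice shares the decoded data rather than loading it again:

```lua
-- Assumed behaviour: these two calls should share the same
-- decoded sound data rather than loading "ding.wav" twice.
local s1 = Sound.new("ding.wav")
local s2 = Sound.new("ding.wav")

-- Each play() call still gets its own SoundChannel, so the two
-- playbacks can overlap and be controlled independently.
local c1 = s1:play(0, 0)  -- play once
local c2 = s2:play(0, 2)  -- looping
```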
@moopf - Gideros uses hardware decoding of an mp3 file where it's available (on iOS it uses AVAudioPlayer and on Android it uses android.media.MediaPlayer). The 20% CPU figure is on an iPod 2nd gen (although with hardware decoding), and I'm sure it's much lower on newer devices.
To support OpenSL, I need to drop Android 2.2 support (10.3%).
@plamen, I was just about to drop Android 2.2 support because I've reimplemented the whole sound API from the ground up. And after this release, I'll try to drop 2.2 and implement OpenSL as soon as reasonably possible.
Still, the ability to pan a sound is actually quite useful.
Outside of just sound effects I think it might be useful to look into synchronised loops (music layers), endless looping, or the ability to specify not only a start offset + looping, but start offset + loop length.
In the longer run, some basic audio effects such as EQ and echo may be put to good use for effects. As it is now, to simulate a distant sound requires loading a whole new sound file...
Much as I can see reason to support legacy hardware, I think it should be totally fine to drop some of that legacy in favour of extending the APIs and actually looking forward. By the time the products using those APIs are out, more often than not the hardware has moved another cycle forward.
my 2c.
Here the problem is that the available platform APIs usually don't provide sound effects like reverb, EQ, echo, etc. Although iOS is really strong on the sound/audio side, it only recently, with iOS 5, started supporting these kinds of effects natively. And most of the professional audio libraries (e.g. FMOD, BASS, irrKlang, Miles, Wwise, etc.) usually provide licenses per single title/game. Maybe in the future, we can provide Lua bindings for FMOD and allow users to purchase FMOD licenses by themselves.
For start offset + loop length, I'm planning to introduce a class SoundRegion (similar to TextureRegion) like:
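No concrete signature was posted, but by analogy with TextureRegion a hypothetical sketch might look like this (SoundRegion and its parameters are invented here, not part of Gideros):

```lua
-- Hypothetical API, mirroring how TextureRegion slices a Texture.
-- Parameters guessed: the source sound, a start offset and a
-- loop length, both in milliseconds.
local sound   = Sound.new("music.mp3")
local region  = SoundRegion.new(sound, 2000, 8000)
local channel = region:play()  -- would loop just that 8-second slice
```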
Kidding, still it's quite a nice feature, because you may want to change the melody to something more dramatic during action and calmer during the easy moments of the game.
However, if there were some way to create/play MIDI-type sounds, creating notes from octaves and values, that would be very useful. Something like the AY-3-8912 chip (I guess it was also used in Amstrad machines), which could use ASCII strings to play tunes, or like Codea does with strings.
Author of Learn Lua for iOS Game Development from Apress ( http://www.apress.com/9781430246626 )
Cool Vizify Profile at https://www.vizify.com/oz-apps
For instance: There's an Event.COMPLETE for a sound file that has finished. What is the equivalent for a loop that has just reached its loop point? That would give us some control over layered or looped music (say, add a layer to the already running music, or crossfade to another section, in both cases respecting the internal timing of the music).
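For reference, the existing completion event looks like this in Gideros Lua; the per-loop event is a wish, so its name below is invented:

```lua
local channel = Sound.new("music.mp3"):play()

-- Exists today: fires once when the whole playback ends.
channel:addEventListener(Event.COMPLETE, function()
    print("sound finished")
end)

-- Wished for (hypothetical event name, not part of Gideros):
-- fired each time playback wraps at the loop point, which would
-- allow crossfading or adding layers in time with the music.
-- channel:addEventListener("loopComplete", function() ... end)
```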
With MP3 specifically there's also the issue of fine control over gapless looping sound channels - http://www.compuphase.com/mp3/mp3loops.htm (scroll to Part 2). From what I can hear, Gideros too has the gap at the start/end of a looped mp3 (expected).
Is AAC/M4A out of bounds simply because of Android? (specs on seamless looping AAC natively here http://developer.apple.com/library/mac/#qa/qa1636/_index.html )
As far as I could see, the sound functions operate in milliseconds? That's fairly imprecise compared to samples. Is there a reason for it or is it just with the intention of being easy to understand?
In my version (2012.09), it seems looping an MP3 only respects the start point the first time the sound is played. Also the "complete" event only fires once the number of loops is over (what I spoke about above).
So to recap, better control over playback (start, end, number of loops, possibly seamless looping), an event at the end of a loop.
On Desktop player, I'm using mpg123 and I've built mpg123 library with gapless mode enabled. But I'm not sure if it works as intended or not (it seems it doesn't). And although AAC/M4A is out of options, it's possible to add Ogg Vorbis support (which I understand allows gapless looping)
> the sound functions operate in milliseconds?
Yes. But I can enable floating-point milliseconds instead of integers so that you can obtain sample-level precision. Btw, is sample-level precision really needed in real life? Aren't milliseconds precise enough?
To have better control over playback, one other option could be adding sync points to the sound channel: when playback passes a sync point, an event is dispatched. But in Gideros nearly all events are queued and dispatched later, so a sync point event may be delayed by about 1000/60 ~= 16 milliseconds. I don't know if this limitation makes sync points useless or not.
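A hypothetical shape for sync points, with the method and event names invented purely to illustrate the idea above:

```lua
-- Hypothetical API: addSyncPoint and the "syncPoint" event do not
-- exist in Gideros; this only sketches the proposal.
local channel = Sound.new("music.mp3"):play()
channel:addSyncPoint(4000)  -- request an event at ~4000 ms

channel:addEventListener("syncPoint", function(event)
    -- Events are queued per frame, so this may arrive up to
    -- ~16 ms (one frame at 60 fps) after the audio actually
    -- passed the sync point.
    print("reached sync point")
end)
```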
I think sync points might actually be of great benefit, even if events arrive a bit late. I'd move the sync point earlier, then see what results that produces. Again, it would come down to the SoundChannel/SoundRegion's ability to handle moving loop points, especially without interrupting the sound if it's still playing.
Ogg is a great option as well; as far as I know a lot of game devs use it. It also sounds better than mp3 at lower bitrates. Its transients are better too. The only thing I'm unsure about is whether devices can decode it in hardware...?
Maybe AAC/M4A can be supported with the help of Android's open source AAC library; http://sourceforge.net/mailarchive/message.php?msg_id=29526038
Practically any move in a similar direction would be of great benefit at this stage...