In a transform encoder for audio data, encoded data in the form of mantissas,
exponents and coupling data is packed into fixed length frames in an output bitstream.
The fields within the frame for carrying the different forms of data are variable
in length, and space within the frame must be allocated between them to fit all
of the required information into the frame. The space required by the various data
types depends on certain encoding parameters, which are calculated for a particular
frame before the data is encoded, thus ensuring that the encoded data will fit
into the frame before the computationally expensive encoding process is carried
out. Information relating to, for example, transform length, coupling parameters
and exponent strategy is determined, which allows the space required for the coupling
and exponent data to be calculated. The mantissa encoding parameters can then be
iteratively determined so that the encoded mantissas will fit into the frame with
the other encoded data. The determined encoding parameters are stored and the audio
data is encoded according to those parameters after it has been determined that
the encoded data will fit into the frame.
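The iterative determination of mantissa encoding parameters can be sketched as a search over a single allocation parameter: with the coupling and exponent overhead fixed, the encoder finds the richest mantissa quantisation whose bit cost still fits the remaining frame budget. The frame size, the toy bit-cost model, and all names below (`FRAME_BITS`, `mantissa_bits`, `fit_mantissa_params`, the SNR-offset parameter) are illustrative assumptions, not the method disclosed here.

```python
# Hypothetical sketch of the iterative parameter search. The constants and
# the bit-cost model are assumptions for illustration only.

FRAME_BITS = 1536 * 8  # assumed fixed frame length, in bits


def mantissa_bits(snr_offset, band_exponents):
    """Toy estimate of the bits needed to encode all mantissas.

    A larger SNR offset allocates more bits per mantissa; bands with
    larger exponents (lower-level signal) are assumed to need fewer bits.
    """
    total = 0
    for exp in band_exponents:
        total += max(0, snr_offset // 16 - exp)
    return total


def fit_mantissa_params(overhead_bits, band_exponents):
    """Return the largest SNR offset whose mantissa cost fits the frame.

    overhead_bits is the space already committed to exponents, coupling
    data and side information, computed before mantissa encoding begins.
    """
    budget = FRAME_BITS - overhead_bits
    lo, hi = 0, 1024  # assumed search range for the offset parameter
    best = 0
    # The cost is monotonic in the offset, so a binary search converges
    # on the best-fitting parameter without encoding any mantissas.
    while lo <= hi:
        mid = (lo + hi) // 2
        if mantissa_bits(mid, band_exponents) <= budget:
            best = mid
            lo = mid + 1
        else:
            hi = mid - 1
    return best
```

Once the search returns, the chosen parameter is stored and the (computationally expensive) mantissa quantisation is performed exactly once, with a guarantee that the result fits the frame.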