Java – Convert binary representations of integers to ASCII in Java Card

I want to convert an integer of arbitrary length, represented as binary bytes, to its ASCII decimal form.

For example, the integer 33023 is encoded as the bytes 0x80FF. I want to represent 0x80FF as the ASCII string "33023", whose bytes are 0x3333303233.
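To see where the bytes 0x3333303233 come from, here is a short plain-Java sketch (off-card code, class name illustrative) that prints the ASCII bytes of the decimal digits:

```java
// Off-card sketch (plain Java SE): show the ASCII bytes of "33023".
public class AsciiDigitsDemo {
    public static void main(String[] args) {
        int value = 0x80FF;                       // 33023
        String digits = Integer.toString(value);  // "33023"
        StringBuilder hex = new StringBuilder("0x");
        for (int i = 0; i < digits.length(); i++) {
            // each ASCII digit is the digit value plus '0' (0x30)
            hex.append(Integer.toHexString(digits.charAt(i)));
        }
        System.out.println(hex);                  // prints "0x3333303233"
    }
}
```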

I’m working in a Java Card environment that has no String type, so I have to do the conversion manually using binary operations.

What is the most efficient way to solve this, given that the Java Card environment on a 16-bit smart card is very limited?

Solution

This is trickier than you might think, because it requires base (radix) conversion, and base conversion has to be performed over the entire number using big-integer arithmetic.

That certainly doesn’t mean we can’t create an efficient implementation of said big-integer arithmetic specifically for this purpose. The implementation below produces zero-padded output (which is usually required on Java Card) and uses no additional memory (!). Note that the input value is overwritten, so if you want to keep it, you have to copy the original big endian number first. It is highly recommended to keep the input in transient memory (RAM).

The code simply divides the bytes by the new base (10 for decimal) and keeps the remainder. The remainder is the lowest (least significant) digit. Since the value has now been divided by 10, the next remainder is the next-higher digit. The loop keeps dividing and collecting remainders until the value is zero and the calculation is complete.

The tricky part of the algorithm is the inner loop, which divides the value by 10 using byte-wise long division (tail division) and yields one remainder, i.e. one decimal digit, per run. This also means the method runs in O(n), where n is the number of digits in the result (counting one tail division as a single operation). Note that n can be calculated as ceil(bigNumBytes * log10(256)); the results are precomputed in the BYTES_TO_DECIMAL_SIZE table below. log10(256) is, of course, a constant: approximately 2.408.

This is the final, optimized code (see the edit history for earlier versions):

/**
 * Converts an unsigned big endian value within the buffer to the same value
 * stored using ASCII digits. The ASCII digits may be zero padded, depending
 * on the value within the buffer.
 * <p>
 * <strong>Warning:</strong> this method zeros the value in the buffer that
 * contains the original number. It is strongly recommended that the input
 * value is in fast transient memory as it will be overwritten multiple
 * times - until it is all zero.
 * </p>
 * <p>
 * <strong>Warning:</strong> this method fails if not enough bytes are
 * available in the output BCD buffer while destroying the input buffer.
 * </p>
 * <p>
 * <strong>Warning:</strong> the big endian number can only occupy 16 bytes
 * or less for this implementation.
 * </p>
 * 
 * @param uBigBuf
 *            the buffer containing the unsigned big endian number
 * @param uBigOff
 *            the offset of the unsigned big endian number in the buffer
 * @param uBigLen
 *            the length of the unsigned big endian number in the buffer
 * @param decBuf
 *            the buffer that is to receive the BCD encoded number
 * @param decOff
 *            the offset in the buffer to receive the BCD encoded number
 * @return decLen, the length in the buffer of the received BCD encoded
 *         number
 */
public static short toDecimalASCII(byte[] uBigBuf, short uBigOff,
        short uBigLen, byte[] decBuf, short decOff) {

    // variables required to perform long division by 10 over bytes
    // possible optimization: reuse remainder for dividend (yuk!)
    short dividend, division, remainder;

    // calculate stuff outside of loop
    final short uBigEnd = (short) (uBigOff + uBigLen);
    final short decDigits = BYTES_TO_DECIMAL_SIZE[uBigLen];

    // --- basically perform division by 10 in a loop, storing the remainder

    // traverse from right (least significant) to the left for the decimals
    for (short decIndex = (short) (decOff + decDigits - 1); decIndex >= decOff; decIndex--) {

        // --- the following code performs tail division by 10 over bytes

        // clear remainder at the start of the division
        remainder = 0;

        // traverse from left (most significant) to the right for the input
        for (short uBigIndex = uBigOff; uBigIndex < uBigEnd; uBigIndex++) {

            // get rest of previous result times 256 (bytes are base 256)
            // ... and add next positive byte value
            // optimization: doing shift by 8 positions instead of mul.
            dividend = (short) ((remainder << 8) + (uBigBuf[uBigIndex] & 0xFF));

            // do the division
            division = (short) (dividend / 10);

            // optimization: perform the modular calculation using
            // ... subtraction and multiplication
            // ... instead of calculating the remainder directly
            remainder = (short) (dividend - division * 10);

            // store the result in place for the next iteration
            uBigBuf[uBigIndex] = (byte) division;
        }

        // the remainder is what we were after
        // add '0' value to create ASCII digits
        decBuf[decIndex] = (byte) (remainder + '0');
    }

    return decDigits;
}
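As a usage sketch, the method can be exercised off-card with the 0x80FF example from the question (plain Java SE; the class name here is an illustrative assumption, the method body is the one shown above):

```java
// Off-card demo of the conversion (plain Java SE, not Java Card code).
public class ToDecimalAsciiDemo {

    private static final byte[] BYTES_TO_DECIMAL_SIZE = { 0, 3, 5, 8, 10, 13,
            15, 17, 20, 22, 25, 27, 29, 32, 34, 37, 39 };

    // the method from above, comments stripped for brevity
    public static short toDecimalASCII(byte[] uBigBuf, short uBigOff,
            short uBigLen, byte[] decBuf, short decOff) {
        short dividend, division, remainder;
        final short uBigEnd = (short) (uBigOff + uBigLen);
        final short decDigits = BYTES_TO_DECIMAL_SIZE[uBigLen];
        for (short decIndex = (short) (decOff + decDigits - 1); decIndex >= decOff; decIndex--) {
            remainder = 0;
            for (short uBigIndex = uBigOff; uBigIndex < uBigEnd; uBigIndex++) {
                dividend = (short) ((remainder << 8) + (uBigBuf[uBigIndex] & 0xFF));
                division = (short) (dividend / 10);
                remainder = (short) (dividend - division * 10);
                uBigBuf[uBigIndex] = (byte) division;
            }
            decBuf[decIndex] = (byte) (remainder + '0');
        }
        return decDigits;
    }

    public static void main(String[] args) {
        byte[] input = { (byte) 0x80, (byte) 0xFF }; // 33023, will be zeroed
        byte[] ascii = new byte[39];                 // enough for 16 input bytes
        short len = toDecimalASCII(input, (short) 0, (short) 2, ascii, (short) 0);
        // two input bytes always yield 5 ASCII digits, zero padded if needed
        System.out.println(new String(ascii, 0, len)); // prints "33023"
    }
}
```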

/*
 * pre-calculated array storing the number of decimal digits for big endian
 * encoded number with len bytes: ceil(len * log_10(256))
 */
private static final byte[] BYTES_TO_DECIMAL_SIZE = { 0, 3, 5, 8, 10, 13,
        15, 17, 20, 22, 25, 27, 29, 32, 34, 37, 39 };

To support larger input sizes, simply calculate the next decimal sizes and append them to the table.
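The table entries can be regenerated off-card from the ceil(len * log10(256)) formula. This is a plain Java SE sketch (java.lang.Math is not available on Java Card; the class and method names are illustrative):

```java
// Off-card helper to (re)generate BYTES_TO_DECIMAL_SIZE entries:
// a big endian number of len bytes needs ceil(len * log10(256)) decimal digits.
public class DecimalSizeTable {

    public static byte decimalSize(int lenBytes) {
        return (byte) Math.ceil(lenBytes * Math.log10(256));
    }

    public static void main(String[] args) {
        // print table entries for inputs of 0..20 bytes
        StringBuilder table = new StringBuilder();
        for (int len = 0; len <= 20; len++) {
            table.append(decimalSize(len));
            if (len < 20) table.append(", ");
        }
        System.out.println(table);
    }
}
```

The first 17 values printed should match the table above.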
