Part of the latency of memory read and write operations comes from moving
data into or out of the memory's data latches over an I/O bus.
Methods and circuitry are presented for improving performance in
non-volatile memory devices by allowing the memory to perform some of
these data caching and transfer operations in the background while the
memory core is busy with a read operation. A scheme for caching read data
is implemented so that, even when the read of a current page on a current
wordline must be preceded by a prerequisite read of data on an adjacent
wordline, that prerequisite read, along with any associated I/O access, is
performed preemptively during the cycle for reading the previous page. The
current read can then proceed while the previously read page's data is
being transferred out over the I/O bus.
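
To make the ordering concrete, the following is a minimal control-loop
sketch, assuming a hypothetical set of controller hooks (sense_page,
wait_ready, stream_out, apply_neighbor_correction, adjacent_wl); none of
these names or stub behaviors come from an actual device command set. The
point is only the sequencing: the prerequisite adjacent-wordline read
needed by page N is issued during page N-1's cycle, so the core read of
page N can overlap page N-1's transfer over the I/O bus.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_BYTES 16                  /* tiny pages keep the demo readable */
typedef uint8_t page_buf[PAGE_BYTES];

/* Hypothetical controller hooks; the stubs only log the sequencing. */
static void sense_page(int wl, page_buf latch)
{
    memset(latch, (uint8_t)wl, PAGE_BYTES);   /* stand-in for a core sense */
    printf("  core : sense WL%d\n", wl);
}
static void wait_ready(void)
{
    printf("  core : ready\n");               /* stand-in for polling busy */
}
static void stream_out(const page_buf latch)
{
    printf("  i/o  : stream out WL%d data (overlaps the core read)\n", latch[0]);
}
static void apply_neighbor_correction(page_buf pg, const page_buf nb)
{
    printf("  calc : correct WL%d data using WL%d data\n", pg[0], nb[0]);
}
static int adjacent_wl(int wl)
{
    return wl + 1;                 /* exact relationship is device-specific */
}

/* Pipelined cache read: the prerequisite adjacent-wordline read for the
 * next page is done in the current page's cycle, so each new core read can
 * start at once while the previous page occupies the I/O bus. */
static void cached_read(int first_wl, int count)
{
    page_buf cur, prev, neighbor;

    /* Prime the pipeline: prerequisite read, the first page itself, then
     * the prerequisite data for the second page. */
    sense_page(adjacent_wl(first_wl), neighbor);      wait_ready();
    sense_page(first_wl, prev);                       wait_ready();
    apply_neighbor_correction(prev, neighbor);
    sense_page(adjacent_wl(first_wl + 1), neighbor);  wait_ready();

    for (int i = 1; i < count; i++) {
        int wl = first_wl + i;
        printf("cycle for WL%d:\n", wl);

        sense_page(wl, cur);           /* core starts the current read...    */
        stream_out(prev);              /* ...while the previous page streams */
        wait_ready();

        apply_neighbor_correction(cur, neighbor);

        /* Preemptive prerequisite read for the page after this one. */
        sense_page(adjacent_wl(wl + 1), neighbor);    wait_ready();

        memcpy(prev, cur, PAGE_BYTES);
    }
    stream_out(prev);                  /* drain the final page */
}

int main(void)
{
    cached_read(0, 4);
    return 0;
}
```

Running the sketch prints one cycle per page, with the core sense for the
current page and the I/O transfer of the previous page issued back to back
before the ready wait, which is the overlap the scheme relies on.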