public class SplitFileFetcherStorage
extends java.lang.Object
Stores the state for a SplitFileFetcher, persisted to a LockableRandomAccessBuffer (i.e. a single random access file), but with most of the metadata kept in memory. The data, and the larger metadata such as the full keys, are read from disk when needed, and written back to disk as they change.
On disk format goals:
Overall on-disk structure:

- BLOCK STORAGE: Decoded data, one segment at a time (the last segment's size is rounded up to a whole block). Within each segment, the number of blocks stored equals the number of data blocks (plus the number of cross-check blocks, if there are cross-check blocks), but they are not necessarily actual data blocks (they may be check blocks), and they may not be in the correct order. When we FEC decode, we read in the blocks, construct the CHKs to see which keys they belong to, check that we still have enough valid keys (updating the metadata if the counts were wrong), do the decode, and write the data blocks back in the correct order; the segment is then finished. When all the segments are finished, we generate a stream as usual, i.e. we still need to copy the file. It may be possible in future to simply truncate the file, but in many cases we need to decompress or filter, and there are significant issues with code complexity and seeks during FEC decodes; see bug #6063.
- KEY LIST: The original key list. Not changed when a block is fetched. Fixed and checksummed (each segment has a checksum).
- SEGMENT STATUS: The status of each segment, including the status of each block, including flags and where it is in the block storage within the segment. Checksummed per segment, so it needs to be written as a whole segment. Can be regenerated from the block store and key list, which happens routinely when FEC decoding.
- BLOOM FILTERS: The main bloom filter and the per-segment bloom filters.
- ORIGINAL METADATA: For extra robustness, keep the full original metadata.
- ORIGINAL URL: If the original key is available, keep that too.
- BASIC SETTINGS: Type of splitfile, length of file, overall decryption key, number of blocks and check blocks per segment, etc. Fixed and checksummed. Read as a block so we can check the checksum.
- FOOTER: Length of basic settings (so we can seek back to get them), version number, checksum, magic value.

OTHER NOTES:

- CHECKSUMS: 4-byte CRC32.
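The 4-byte CRC32 mentioned under OTHER NOTES can be illustrated with `java.util.zip.CRC32`. This is only a sketch of the per-section check; the real layout of the checksummed regions is defined by the storage format above, and the class `ChecksumSketch` and its method names are invented for illustration:

```java
import java.util.zip.CRC32;

// Illustrative sketch only: shows the 4-byte CRC32 check the format
// description uses for KEY LIST, SEGMENT STATUS and BASIC SETTINGS.
public class ChecksumSketch {
    /** Compute the 4-byte CRC32 over a checksummed region. */
    static long crc32Of(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue(); // low 32 bits hold the checksum
    }

    /** Recompute and compare against the stored checksum for a region. */
    static boolean verify(byte[] data, long storedChecksum) {
        return crc32Of(data) == storedChecksum;
    }

    public static void main(String[] args) {
        byte[] settings = "123456789".getBytes();
        long checksum = crc32Of(settings);
        System.out.println(Long.toHexString(checksum)); // prints "cbf43926"
        System.out.println(verify(settings, checksum)); // prints "true"
    }
}
```

Because each section is checksummed as a whole, a section must be rewritten in full (as the SEGMENT STATUS note says) so its stored CRC stays consistent.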
- CONCURRENCY: Callbacks into the fetcher should be run off-thread, as they will usually be inside a MemoryLimitedJob.
- LOCKING: Locks here are trivial or taken last, hence methods can be called inside e.g. RGA calls to getCooldownTime etc.
- PERSISTENCE: This whole class is transient; it is recreated on startup by the SplitFileFetcher. Many of the fields are also transient, e.g. SplitFileFetcherSegmentStorage's cooldown fields.
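The CONCURRENCY rule above can be sketched as follows. This is a hypothetical illustration of the pattern, not the real dispatch code: the actual class hands work to a MemoryLimitedJobRunner / PersistentJobRunner rather than a plain ExecutorService:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of "callbacks into fetcher should be run off-thread":
// rather than invoking the callback while holding storage locks, queue it.
public class OffThreadCallbackSketch {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    /** Hand the fetcher callback to another thread instead of calling it inline. */
    public void onSegmentFinished(Runnable fetcherCallback) {
        executor.submit(fetcherCallback);
    }

    public void shutdown() {
        executor.shutdown();
    }
}
```

Keeping callbacks off-thread is what makes the LOCKING rule workable: since this class's locks are trivial or taken last, no callback can re-enter while a lock is held.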
| Modifier and Type | Field and Description |
|---|---|
| `FECCodec` | `fecCodec` - FEC codec for the splitfile, if needed. |
| Constructor and Description |
|---|
| `SplitFileFetcherStorage(LockableRandomAccessBuffer raf, boolean realTime, SplitFileFetcherStorageCallback callback, FetchContext origContext, RandomSource random, PersistentJobRunner exec, KeysFetchingLocally keysFetching, Ticker ticker, MemoryLimitedJobRunner memoryLimitedJobRunner, ChecksumChecker checker, boolean newSalt, KeySalter salt, boolean resumed, boolean completeViaTruncation)` - Construct a SplitFileFetcherStorage from a stored RandomAccessBuffer and the appropriate local settings passed in. |
| `SplitFileFetcherStorage(Metadata metadata, SplitFileFetcherStorageCallback fetcher, java.util.List<Compressor.COMPRESSOR_TYPE> decompressors, ClientMetadata clientMetadata, boolean topDontCompress, short topCompatibilityMode, FetchContext origFetchContext, boolean realTime, KeySalter salt, FreenetURI thisKey, FreenetURI origKey, boolean isFinalFetch, byte[] clientDetails, RandomSource random, BucketFactory tempBucketFactory, LockableRandomAccessBufferFactory rafFactory, PersistentJobRunner exec, Ticker ticker, MemoryLimitedJobRunner memoryLimitedJobRunner, ChecksumChecker checker, boolean persistent, java.io.File storageFile, FileRandomAccessBufferFactory diskSpaceCheckingRAFFactory, KeysFetchingLocally keysFetching)` - Construct a new SplitFileFetcherStorage from metadata. |
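The `storageFile` parameter of the metadata constructor has a specific contract (documented in the constructor detail below): the file must already exist and be 0 bytes long, and on success it is truncated so it contains only the final data. A caller-side sketch, with invented helper names:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch of the storageFile contract: caller creates the file empty before
// constructing the storage; on success the file is truncated to the data.
// Helper names are illustrative, not part of the real API.
public class StorageFileSketch {
    /** Ensure the file exists and is 0 bytes long, as the constructor requires. */
    static File createEmptyStorageFile(File f) throws IOException {
        if (!f.createNewFile() && f.length() != 0)
            throw new IOException("storageFile must exist and be 0 bytes long");
        return f;
    }

    /** On completion, keep only the decoded data and drop trailing metadata. */
    static void truncateToFinalData(File f, long finalLength) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.setLength(finalLength);
        }
    }
}
```

This is what makes "complete via truncation" possible: the decoded data sits at the start of the file, so cutting the file at the data length yields the finished download in place.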
| Modifier and Type | Method and Description |
|---|---|
| `freenet.client.async.SplitFileFetcherStorage.MyKey` | `chooseRandomKey()` - Choose a random key which can be fetched at the moment. |
| `long` | `countSendableKeys()` |
| `long` | `countUnfetchedKeys()` |
| `void` | `fail(FetchException e)` - Fail the request, off-thread. |
| `void` | `failedBlock()` |
| `void` | `failOnDiskError(ChecksumFailedException e)` |
| `void` | `failOnDiskError(java.io.IOException e)` |
| `void` | `failOnSegment(SplitFileFetcherSegmentStorage segment)` - A segment ran out of retries. |
| `void` | `finishedCheckingDatastoreOnLocalRequest(ClientContext context)` - Called when the request is local-only and we have finished checking the datastore. |
| `void` | `finishedFetcher()` |
| `void` | `finishedSuccess(SplitFileFetcherSegmentStorage segment)` - A segment successfully completed. |
| `long` | `getCooldownWakeupTime(long now)` - Returns -1 if the request is finished, otherwise the wakeup time. |
| `ClientKey` | `getKey(freenet.client.async.SplitFileFetcherStorage.MyKey key)` |
| `short` | `getPriorityClass()` |
| `boolean` | `hasCheckedStore()` |
| `boolean` | `lastBlockMightNotBePadded()` |
| `void` | `lazyWriteMetadata()` |
| `Key[]` | `listUnfetchedKeys()` |
| `int` | `maxRetries()` |
| `void` | `maybeClearCooldown()` - Called when a segment exits cooldown. |
| `void` | `onFailure(freenet.client.async.SplitFileFetcherStorage.MyKey key, FetchException fe)` |
| `void` | `restartedAfterDataCorruption(boolean wasCorrupt)` |
| `void` | `setHasCheckedStore(ClientContext context)` |
| `boolean` | `start(boolean resume)` - Start the storage layer. |
| `StreamGenerator` | `streamGenerator()` |
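The summary above says `chooseRandomKey()` picks a random key that can be fetched at the moment. A deliberately simplified, hypothetical model of that selection (the real implementation also has to honour cooldowns, retries, and per-segment state, and `KeysFetchingLocally` supplies the in-flight set):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Hypothetical sketch of random key selection: pick uniformly among keys
// that are unfetched and not already being fetched. Not the real algorithm.
public class ChooseKeySketch {
    /** Returns a random fetchable key, or null if nothing is sendable now. */
    static Integer chooseRandomKey(List<Integer> unfetched, Set<Integer> fetching,
                                   Random random) {
        List<Integer> candidates = new ArrayList<>();
        for (Integer k : unfetched)
            if (!fetching.contains(k)) candidates.add(k);
        if (candidates.isEmpty()) return null;
        return candidates.get(random.nextInt(candidates.size()));
    }
}
```

Returning null when nothing is sendable mirrors the cooldown machinery: the caller then falls back to `getCooldownWakeupTime(now)` to decide when to try again.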
public final FECCodec fecCodec

FEC codec for the splitfile, if needed.
public SplitFileFetcherStorage(Metadata metadata, SplitFileFetcherStorageCallback fetcher, java.util.List<Compressor.COMPRESSOR_TYPE> decompressors, ClientMetadata clientMetadata, boolean topDontCompress, short topCompatibilityMode, FetchContext origFetchContext, boolean realTime, KeySalter salt, FreenetURI thisKey, FreenetURI origKey, boolean isFinalFetch, byte[] clientDetails, RandomSource random, BucketFactory tempBucketFactory, LockableRandomAccessBufferFactory rafFactory, PersistentJobRunner exec, Ticker ticker, MemoryLimitedJobRunner memoryLimitedJobRunner, ChecksumChecker checker, boolean persistent, java.io.File storageFile, FileRandomAccessBufferFactory diskSpaceCheckingRAFFactory, KeysFetchingLocally keysFetching) throws FetchException, MetadataParseException, java.io.IOException
Parameters:
- `metadata`
- `fetcher`
- `decompressors`
- `clientMetadata`
- `topDontCompress`
- `topCompatibilityMode`
- `origFetchContext`
- `realTime`
- `salt`
- `thisKey`
- `origKey`
- `isFinalFetch`
- `clientDetails`
- `random`
- `tempBucketFactory`
- `rafFactory`
- `exec`
- `ticker`
- `memoryLimitedJobRunner`
- `checker`
- `persistent`
- `storageFile` - If non-null, we will use this file to store the data in. It must already exist and must be 0 bytes long. We will use it, and then when complete, truncate the file so it only contains the final data before calling onSuccess(). Also, in this case, rafFactory must be a DiskSpaceCheckingRandomAccessBufferFactory.
- `diskSpaceCheckingRAFFactory`
- `keysFetching` - Must be passed in at this point as we will need it later. However, none of this is persisted directly, so this is not a problem.

Throws:
- `FetchException` - If we failed to set up the download due to a problem with the metadata.
- `MetadataParseException`
- `java.io.IOException` - If we were unable to create the file to store the metadata and downloaded blocks in.

public SplitFileFetcherStorage(LockableRandomAccessBuffer raf, boolean realTime, SplitFileFetcherStorageCallback callback, FetchContext origContext, RandomSource random, PersistentJobRunner exec, KeysFetchingLocally keysFetching, Ticker ticker, MemoryLimitedJobRunner memoryLimitedJobRunner, ChecksumChecker checker, boolean newSalt, KeySalter salt, boolean resumed, boolean completeViaTruncation) throws java.io.IOException, StorageFormatException, FetchException

Construct a SplitFileFetcherStorage from a stored RandomAccessBuffer and the appropriate local settings passed in.
Parameters:
- `newSalt` - True if the global salt has changed.
- `salt` - The global salter. Should be passed in even if the global salt hasn't changed, as we may not have completed regenerating bloom filters.

Throws:
- `java.io.IOException` - If the restore failed because of a failure to read from disk.
- `StorageFormatException`
- `FetchException` - If the request has already failed (but it wasn't processed before restarting).

public boolean start(boolean resume)

Start the storage layer.

Parameters:
- `resume` - True only if we are restarting without having serialized, i.e. from the file only. In this case we will need to tell the parent how many blocks have been fetched.

public short getPriorityClass()
public void finishedSuccess(SplitFileFetcherSegmentStorage segment)
Throws:
- `PersistenceDisabledException`
public StreamGenerator streamGenerator()
public void lazyWriteMetadata()
public void finishedFetcher()
public void fail(FetchException e)
Fail the request, off-thread.

Parameters:
- `e`

public void failOnSegment(SplitFileFetcherSegmentStorage segment)

A segment ran out of retries.

Parameters:
- `segment` - The segment that failed.

public void failOnDiskError(java.io.IOException e)
public void failOnDiskError(ChecksumFailedException e)
public long countUnfetchedKeys()
public Key[] listUnfetchedKeys()
public long countSendableKeys()
public freenet.client.async.SplitFileFetcherStorage.MyKey chooseRandomKey()
public void finishedCheckingDatastoreOnLocalRequest(ClientContext context)
public void onFailure(freenet.client.async.SplitFileFetcherStorage.MyKey key, FetchException fe)
public ClientKey getKey(freenet.client.async.SplitFileFetcherStorage.MyKey key)
public int maxRetries()
public void failedBlock()
public boolean lastBlockMightNotBePadded()
public void restartedAfterDataCorruption(boolean wasCorrupt)
public void maybeClearCooldown()
public long getCooldownWakeupTime(long now)
public void setHasCheckedStore(ClientContext context)
public boolean hasCheckedStore()
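The return convention of `getCooldownWakeupTime(long now)` (-1 if the request is finished, otherwise the wakeup time) lends itself to a polling pattern. The helper below is hypothetical, and the reading of the wakeup value as an absolute time in milliseconds is an assumption:

```java
// Hypothetical caller helper for the documented convention of
// getCooldownWakeupTime(long now): -1 means the request is finished;
// otherwise (assumed here) an absolute millisecond time to wake up.
public class WakeupSketch {
    /** Translate a wakeup time into a sleep duration; -1 means stop polling. */
    static long sleepMillis(long wakeupTime, long now) {
        if (wakeupTime == -1) return -1;      // request finished
        return Math.max(0, wakeupTime - now); // wait until the wakeup time
    }
}
```

A caller combining this with `chooseRandomKey()` would fetch while keys are available, then sleep until the cooldown wakeup before trying again.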