We have our data separated into chunks now. Next we have to write a script that sends this data to the PHP script or servlet, which saves it to a file. This is the elementary part of the JavaScript function and also the simple part. Because we append every chunk to the final file, we can't work with asynchronous requests. If you want to, you have to send the number of the chunk and the final file size as well, so the server can create a dummy file and replace the part covered by the sent chunk. For the JavaScript the changes are not that big, but the server-side script becomes more complex.
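For the asynchronous variant, the extra per-chunk metadata could look like this; a minimal sketch, where the helper and the field names (chunkNumber, offset, totalSize) are my assumptions, not part of the original code:

```javascript
// Sketch: metadata an asynchronous upload would send with each chunk,
// so the server can create a dummy file of the final size and write
// every chunk at the right offset, regardless of arrival order.
function chunkMetadata(chunkIndex, chunkSize, totalSize) {
    return {
        chunkNumber: chunkIndex,            // which chunk this is
        offset: chunkIndex * chunkSize,     // byte position where the server writes it
        totalSize: totalSize                // lets the server pre-allocate the dummy file
    };
}

console.log(chunkMetadata(3, 250000, 1000000).offset); // 750000
```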
So we will use the synchronous style and work with a simple for loop to run through our array of data chunks.
var result = null;
for (var i = 0; i < chunks.length; i++) {
    var last = false;
    if (i == (chunks.length - 1)) {
        last = true;
    }
    // Send the current chunk, not the whole array.
    result = uploadFileChunk(chunks[i], filenamePrefix + file.name, url, params, last);
}
return result;
Only the result of the request for the last chunk is returned, because that is the chunk where the magic happens (copying from the temp folder to the real target folder, creating and saving the entity to the database, etc.).
I add a prefix to the filename to create a name like userid + prefix + filename + file extension (*.part or something like that). So a user can upload more than one file with the same filename without causing problems on the server side.
But you should also send the original filename as something like "clearname". In this example you can add it to the params array (params["clearname"] = file.name;) and use it as the filename in the DB entity, to set this name in the header data when the file is downloaded, or in lists of the uploaded files.
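The naming scheme could be sketched like this; buildStoredName is a hypothetical helper and the underscore separators are my choice, not from the original:

```javascript
// Sketch of the naming scheme described above:
// userid + prefix + filename + extension, with the original name
// kept separately as "clearname" for the DB entity.
// buildStoredName is a hypothetical helper, not part of the original code.
function buildStoredName(userId, prefix, originalName) {
    return userId + "_" + prefix + "_" + originalName + ".part";
}

var params = {};
params["clearname"] = "report.pdf"; // original name, kept for download headers and lists
console.log(buildStoredName(42, "upload", "report.pdf")); // "42_upload_report.pdf.part"
```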
function uploadFileChunk(chunk, filename, url, params, lastChunk) {
    var formData = new FormData();
    formData.append('upfile', chunk, filename);
    formData.append("filename", filename);
    formData.append("lastChunk", lastChunk); // tells the server when to run the final steps
    for (var key in params) {
        formData.append(key, params[key]);
    }
    // Synchronous request: the next chunk must not be sent before
    // the server has appended this one to the file.
    var request = new XMLHttpRequest();
    request.open("POST", url, false);
    request.send(formData);
    return request.responseText;
}
Now that we have a file object, we need to split it into small chunks to upload them one by one. JavaScript has the ability to work with files. It can't access the filesystem directly, but the open and save dialogs will do everything we need. To save data for more than one session you can use IndexedDB. But here we only need open, save, and drag and drop.
function createChunksOfFile(file, chunkSize) {
    var chunks = new Array();
    var filesize = file.size;
    var counter = 0;
    while (filesize > counter) {
        var chunk = file.slice(counter, (counter + chunkSize));
        counter = counter + chunkSize;
        chunks[chunks.length] = chunk;
    }
    return chunks;
}
The method is very simple. The loop runs as long as the copied size is smaller than the size of the file. With slice(from, to) you set the start and end point to be read (in bytes). If the end point lies behind the real end of the file, no exception is thrown and only the existing part is copied. That makes it very easy for us. With every run of the loop we add 250000 to the copied size until it is greater than the file size. In every run of the loop we copy/slice the bytes from copied-size to copied-size + 250000 and add this chunk to the output array.
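The clamping behaviour of slice() can be seen with a small Blob (File inherits its slice() from Blob); a quick sketch, assuming a runtime where Blob is available:

```javascript
// slice() clamps to the end of the data instead of throwing:
// asking for bytes far past the end just returns what exists.
var blob = new Blob(["0123456789"]); // 10 bytes
var piece = blob.slice(8, 8 + 250000); // end point far behind the real end
console.log(piece.size); // 2 — only the existing bytes are copied
```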
Finally we get an array with all the sliced chunks of the file in the correct order.
Yes, you could also calculate the needed number of chunks and then use a for loop and calculate the start and end for every part. That works great too, but I did it the other way.
var chunks=createChunksOfFile(file,250000);
So we have all chunks at ~250 KB each. 1 MB should give ~4 parts. Yes, it's 1024 bytes per KB, but we keep it simple :-)
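The expected chunk count can be checked with a one-liner, treating 1 MB as 1,000,000 bytes as in the text:

```javascript
// Back-of-the-envelope check of the chunk count for a 1 MB file.
var chunkSize = 250000;
var fileSize = 1000000; // simplified: 1 MB = 1,000,000 bytes
var parts = Math.ceil(fileSize / chunkSize); // last chunk may be smaller
console.log(parts); // 4
```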