DropWizard and AWS
Time to make use of Amazon’s AWS offerings – specifically the Simple Storage Service (S3).
tl;dr / Executive Summary:-
$ curl -L -O https://github.com/AndrewGorton/AwsFileStorage/archive/v1.0.0.zip
$ unzip v1.0.0.zip
$ cd AwsFileStorage-1.0.0
$ mvn package
$ export AWS_ACCESS_KEY=<enter_your_aws_access_key_here>
$ export AWS_SECRET_ACCESS_KEY=<enter_your_aws_secret_access_key_here>
$ export AWSFILESTORAGE_BUCKET=<enter_a_bucket_which_you_have_full_control_over>
$ java -jar target/AwsFileStorage-1.0.0.jar server &
$ curl http://localhost:8080/awsfilestorage/
{"keys":["folder1/","folder2/","somekeyhere/"]}
Firstly you need an AWS account. Log in, go to the S3 console, and create a new bucket (it doesn’t matter which geographic region – I used Ireland).
In the IAM console, create a new group with the custom policy below (change ‘your_bucket_name’ to the name of the bucket you just created).
{ "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets" ], "Resource": "arn:aws:s3:::*" }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:GetBucketAcl" ], "Resource": "arn:aws:s3:::your_bucket_name" }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:GetObjectAcl", "s3:DeleteObject" ], "Resource": "arn:aws:s3:::your_bucket_name/*" } ] } |
This lets users list all the buckets you own, but they can only work inside the ‘your_bucket_name’ bucket.
Create a user with the ‘Generate an access key for each User’ option selected. Note down the Access Key ID and the Secret Access Key (these become AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY respectively).
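If you’d rather script this than click through the console, the AWS CLI can do the same job. A rough sketch – the group, policy, and user names here are placeholders of my own choosing, and I’m assuming the policy JSON above has been saved as policy.json:

$ aws iam create-group --group-name AwsFileStorage
$ aws iam put-group-policy --group-name AwsFileStorage --policy-name AwsFileStorageAccess --policy-document file://policy.json
$ aws iam create-user --user-name awsfilestorage
$ aws iam add-user-to-group --group-name AwsFileStorage --user-name awsfilestorage
$ aws iam create-access-key --user-name awsfilestorage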
I cloned my SimpleDropWizardEcho service, and renamed all the files.
We need some AWS S3 goodness, so we add the S3 libraries to our pom.xml (effectively the below, though it’s a bit more complicated in practice due to version collisions).
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.7.12</version>
</dependency>
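The usual way to resolve those version collisions is a Maven exclusion on the SDK’s transitive dependencies. Exactly which artifacts collide depends on your DropWizard version, so treat this as an illustrative sketch only – jackson-databind is an assumed example, not necessarily the real culprit in your build:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.7.12</version>
    <exclusions>
        <!-- Assumed example: let DropWizard's own Jackson version win -->
        <exclusion>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
        </exclusion>
    </exclusions>
</dependency>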
Then we need a nice class to do the actual S3 listings:
import java.util.ArrayList;
import java.util.List;

// StringUtils from commons-lang3 (assumed; adjust to your commons-lang version)
import org.apache.commons.lang3.StringUtils;

import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class AWSWrapper {
    // Use '/' as the delimiter so S3 keys behave like folder paths
    // (defined here; the original snippet referenced it without a definition)
    private static final String AWS_BUCKET_DELIMITER = "/";

    public List<String> getObjects(String path) {
        List<String> result = new ArrayList<String>();
        AmazonS3Client s3client = new AmazonS3Client(new EnvironmentVariableCredentialsProvider());

        ListObjectsRequest lor = new ListObjectsRequest();
        lor.setBucketName(System.getenv("AWSFILESTORAGE_BUCKET"));
        lor.setDelimiter(AWS_BUCKET_DELIMITER);
        if (StringUtils.isNotBlank(path)) {
            lor.setPrefix(path);
        } else {
            lor.setPrefix("");
        }

        ObjectListing ol = s3client.listObjects(lor);

        // Object summaries are the 'files' directly under the prefix...
        for (S3ObjectSummary singleObject : ol.getObjectSummaries()) {
            result.add(singleObject.getKey());
        }
        // ...and common prefixes are the 'sub-folders'
        for (String singlePrefix : ol.getCommonPrefixes()) {
            result.add(singlePrefix);
        }
        return result;
    }
}
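One caveat worth knowing: a single listObjects call returns at most 1,000 keys, so a large bucket needs paging. A minimal sketch of how the listing loop in getObjects could be extended to page through results (same variables as above):

ObjectListing ol = s3client.listObjects(lor);
while (true) {
    for (S3ObjectSummary singleObject : ol.getObjectSummaries()) {
        result.add(singleObject.getKey());
    }
    for (String singlePrefix : ol.getCommonPrefixes()) {
        result.add(singlePrefix);
    }
    if (!ol.isTruncated()) {
        break;
    }
    // S3 says there are more results; fetch the next page
    ol = s3client.listNextBatchOfObjects(ol);
}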
And then we need some form of response object which gets returned:
import java.util.List;

import com.fasterxml.jackson.annotation.JsonProperty;

public class Listing {
    private List<String> keys;

    public Listing(List<String> keys) {
        this.keys = keys;
    }

    @JsonProperty
    public List<String> getKeys() {
        return keys;
    }
}
And now we have to add this to the Resource class to wire a request up to a response (I add two endpoints: one for the bare URL, and one for URLs with a path-like suffix).
@GET
@Timed
@Path("/")
public Listing getListing() {
    AWSWrapper w = new AWSWrapper();
    List<String> blobs = w.getObjects("");
    return new Listing(blobs);
}

@GET
@Timed
@Path("/{path:.*}")
public Listing getListing(@PathParam("path") String path, @Context HttpServletRequest req) {
    // Treat every request as a 'folder' by ensuring a trailing slash
    if (!path.endsWith("/")) {
        path = path + "/";
    }
    AWSWrapper w = new AWSWrapper();
    List<String> blobs = w.getObjects(path);
    return new Listing(blobs);
}
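For context, those methods live inside a JAX-RS resource class. The class name below is my own assumption, but the root path and media type follow from the curl examples further down:

import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Class name assumed for illustration; the root path matches the curl URLs below
@Path("/awsfilestorage")
@Produces(MediaType.APPLICATION_JSON)
public class AwsFileStorageResource {
    // the getListing() methods above (and uploadObject() below) go here
}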
And we can fire it up and watch it work:
$ mvn package
$ export AWS_ACCESS_KEY=<enter_your_aws_access_key_here>
$ export AWS_SECRET_ACCESS_KEY=<enter_your_aws_secret_access_key_here>
$ export AWSFILESTORAGE_BUCKET=<enter_a_bucket_which_you_have_full_control_over>
$ java -jar target/AwsFileStorage-1.0.0.jar server &
$ curl http://localhost:8080/awsfilestorage/
{"keys":["folder1/","folder2/","somekeyhere/"]}
Also – if you want to upload a file and are targeting this microservice with a web form, you need an endpoint like:
@POST
@Timed
@Path("/{path:.*}")
@Consumes(MediaType.MULTIPART_FORM_DATA)
public void uploadObject(@PathParam("path") String path,
                         @FormDataParam("file") final InputStream fileInputStream,
                         @FormDataParam("file") final FormDataContentDisposition contentDispositionHeader) {
    // Uploads must target a 'folder', i.e. a path ending in a slash
    if (!path.endsWith("/")) {
        throw new WebApplicationException(Response.Status.NOT_ACCEPTABLE);
    }

    // Spool the upload to a uniquely-named temporary file first
    String fileName = UUID.randomUUID().toString() + "_" + contentDispositionHeader.getFileName();
    java.nio.file.Path outputPath = FileSystems.getDefault().getPath(System.getProperty("java.io.tmpdir"), fileName);
    try {
        Files.copy(fileInputStream, outputPath);
    } catch (IOException e) {
        throw new WebApplicationException(e);
    }

    // Probably should schedule this for an async upload
    new AWSWrapper().createObject(outputPath, path + contentDispositionHeader.getFileName());

    try {
        Files.deleteIfExists(outputPath);
    } catch (IOException e) {
        // Don't care
    }
}
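I don’t show AWSWrapper.createObject() above, so here’s a minimal sketch of what it might look like, assuming the same environment-variable credentials and bucket as getObjects():

// Requires: import com.amazonaws.services.s3.model.PutObjectRequest;
public void createObject(java.nio.file.Path localFile, String key) {
    AmazonS3Client s3client = new AmazonS3Client(new EnvironmentVariableCredentialsProvider());
    // Upload the spooled temp file under the requested key
    s3client.putObject(new PutObjectRequest(System.getenv("AWSFILESTORAGE_BUCKET"), key, localFile.toFile()));
}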
which you can then test with:
$ curl -v -F file=@source.jpg http://localhost:8080/awsfilestorage/some_path_here/
where source.jpg is a file which exists on your local hard drive, and some_path_here is the prefix under which it will be uploaded. Note that the original filename is preserved as part of the key.
$ curl -v -o tmp.jpg http://localhost:8080/awsfilestorage/some_path_here/source.jpg
should download the file you uploaded to tmp.jpg.
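The download side isn’t shown above either; a minimal sketch of the AWSWrapper method it would need, again assuming environment-variable credentials (the corresponding GET endpoint would stream this back to the client):

// Requires: import com.amazonaws.services.s3.model.S3Object;
public InputStream getObject(String key) {
    AmazonS3Client s3client = new AmazonS3Client(new EnvironmentVariableCredentialsProvider());
    S3Object obj = s3client.getObject(System.getenv("AWSFILESTORAGE_BUCKET"), key);
    // The caller is responsible for closing the returned stream
    return obj.getObjectContent();
}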
Note that there’s no security on any of this code – so if you deploy it somewhere public, anyone will be able to start using your AWS bucket, and you’ll be charged for the storage and transfer of the data!