General Architecture – Ooggi
https://ooggi-dev.com

[AWS] A (very) Specific DynamoDB Table
https://ooggi-dev.com/challenge-item/132/

Background:

A team working on an important internal company application urgently needs help.
Next week, they’ll be showing off the application prototype to key stakeholders, but there’s a small issue. Their main database just went missing.

One of the developers ran a clean-up script with the wrong input parameters, and the main DynamoDB table was accidentally deleted.

The loss of data from the table is not an issue at this stage. All data can be easily loaded from a static file, for which a helper script exists.
The real issue is the missing table configuration.

There are no deployment scripts or CloudFormation templates for the table as far as the team is aware. The person who manually set up the table, and who generally supports the developers on AWS matters, is on vacation and can’t be reached.

The following were investigated by the team:

  • Helper script to load data into the table
  • Application code that interacts with the DynamoDB table
  • Technical project documentation

Based on the above, a list of requirements for the table set-up was put together.
They need someone to create the table and save the day.

Bounty:

  • Points: 10
  • Path: Cloud Engineer

Difficulty:

  • Level: 1
  • Estimated time: 1-3 hours

Deliverables:

  • A DynamoDB table configured based on requirements

Prototype description:

A DynamoDB table needs to be created based on the requirements collected by the team.
Once the table is set up, the team will load the sample data and test the application to make sure everything is back to normal.
One of the tests will attempt to write an item into the new DynamoDB table.

Requirements:

General

  • The DynamoDB table name shall start with:

    ooggi-r-events-table-
  • The DynamoDB table shall be deployed in the Ireland (eu-west-1) region

Table properties

  • The table shall be configured with the On-demand, pay per request option
  • The table shall be configured with the Standard-Infrequent Access (DynamoDB Standard-IA) table class
  • The table shall use the “AWS managed key” encryption option (the key is stored in your account and is managed by AWS Key Management Service (AWS KMS))

Table Structure / Indexes

  • Main Table Schema Partition Key – client_id (Data Type: String)
  • Main Table Schema Sort Key – timestamp (Data Type: String)
  • Local Secondary Index #1 – amount (Data Type: String), LSI Name: amount-index
  • Local Secondary Index #2 – category (Data Type: String), LSI Name: category-index
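
For reference, a minimal boto3 sketch of one way to create a table matching the above. The table name suffix is a placeholder, and the ALL projection type for the LSIs is an assumption, as the requirements don’t specify one (note that LSIs can only be added at table creation time):

    import boto3

    dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

    dynamodb.create_table(
        TableName="ooggi-r-events-table-demo",              # "-demo" suffix is a placeholder
        BillingMode="PAY_PER_REQUEST",                      # On-demand, pay per request
        TableClass="STANDARD_INFREQUENT_ACCESS",            # DynamoDB Standard-IA
        SSESpecification={"Enabled": True, "SSEType": "KMS"},  # AWS managed KMS key
        AttributeDefinitions=[
            {"AttributeName": "client_id", "AttributeType": "S"},
            {"AttributeName": "timestamp", "AttributeType": "S"},
            {"AttributeName": "amount", "AttributeType": "S"},
            {"AttributeName": "category", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "client_id", "KeyType": "HASH"},   # Partition key
            {"AttributeName": "timestamp", "KeyType": "RANGE"},  # Sort key
        ],
        LocalSecondaryIndexes=[
            {
                "IndexName": "amount-index",
                "KeySchema": [
                    {"AttributeName": "client_id", "KeyType": "HASH"},
                    {"AttributeName": "amount", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
            },
            {
                "IndexName": "category-index",
                "KeySchema": [
                    {"AttributeName": "client_id", "KeyType": "HASH"},
                    {"AttributeName": "category", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
            },
        ],
    )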

Testing set-up

  • The Test Helper IAM Role stack should be deployed to allow the testing platform to interact with your DynamoDB table.

Artifacts/Resources:

  • To provide the test environment with permissions to interact with your DynamoDB table, the following stack needs to be deployed:
    CloudFormation Template Link
    *The IAM Role ARN (Amazon Resource Name) should later be provided on the solution submission page.
    *The IAM Role ARN can be found in the Outputs section of the deployed CloudFormation stack.

Your mission, if you choose to accept it,
is to deploy a DynamoDB table based on the requirements.

[AWS] Mastering DynamoDB Indexes
https://ooggi-dev.com/challenge-item/131/

Background:

A popular online learning platform is being built that will allow IT practitioners to submit solutions to various hands-on challenges to verify their skills 🙂
The team is now looking to design the database layer to accompany the Serverless application stack.

Amazon DynamoDB is a top candidate for the job. Its serverless advantages, among many others, are well understood, but there’s some concern over its querying capabilities and flexibility.
If a prototype can be put together to demonstrate the ability of DynamoDB to support the most common likely queries, it would remove most doubts and allow the team to go all in.

The most used table in the high-level design will be the one dealing with solution submissions.

The most common queries for this table in order of importance/frequency will be:

  • Queries to create, read and update challenge submission items
  • Queries to fetch all the submission items given a specific challenge ID (As well as the ability to further filter the result set based on the time the submission was created)

The team needs someone with experience in working with DynamoDB indexes to put together a table structure to support the above use case.

Bounty:

  • Points: 20
  • Path: Cloud Engineer

Difficulty:

  • Level: 2
  • Estimated time: 1-6 hours

Deliverables:

  • A DynamoDB table configured based on requirements

Prototype description:

A DynamoDB table set up with the relevant main structure and indexes, populated with the sample items provided.
The table design / indexes will support the following queries:

Main query pattern:

  • Writing, getting and updating a specific challenge submission item based on its submission ID.
    *Allows the app to record submissions and allows the support team to quickly look up
    submission details in case of issues.

Secondary query pattern:

  • Getting all challenge submission items for a specific challenge.
  • Getting all challenge submission items for a specific challenge from the last X minutes.
    *Allows the team to monitor submissions for specific challenges. This is useful for
    analytics and monitoring for issues.

Requirements:

General

  • The DynamoDB table name shall start with:

    ooggi-r-challenge-table-
  • The DynamoDB table shall be deployed in the Ireland (eu-west-1) region
  • The table shall be configured with the On-demand, pay per request option.

Sample Items

  • All sample items provided in the dataset.csv resource file should be written/loaded into the table.
  • The Data Types for all properties of the items in DynamoDB shall be set to the String Data Type.

Querying/Indexes

  • Given a specific submission_id in a DynamoDB query, the relevant item shall be returned with all its properties.
  • Given only a specific challenge_id in a DynamoDB query, all submission items for that challenge shall be returned.
  • Given a specific challenge_id and a condition on the submission_time property, the relevant subset of submission items for the specific challenge shall be returned.
    For example: Get all the submission items where challenge_id is equal to X and where the value of submission_time is bigger than Y.

Other properties

  • The table shall not allow for duplicate items with the same submission_id to exist.
  • The table shall support the strong consistency option for queries to get/read specific submission items based on the submission_id.
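
One design that would satisfy all of the above (not the only one): key the base table on submission_id alone, which gives unique items and strongly consistent point reads, and add a Global Secondary Index keyed on challenge_id with submission_time as its sort key for the secondary query pattern. A minimal boto3 sketch, with the GSI name, table name suffix and the sample query values as assumptions:

    import boto3
    from boto3.dynamodb.conditions import Key

    dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

    # Base table keyed on submission_id: one item per submission, strongly
    # consistent GetItem/Query supported on the base table key.
    dynamodb.create_table(
        TableName="ooggi-r-challenge-table-demo",        # "-demo" suffix is a placeholder
        BillingMode="PAY_PER_REQUEST",
        AttributeDefinitions=[
            {"AttributeName": "submission_id", "AttributeType": "S"},
            {"AttributeName": "challenge_id", "AttributeType": "S"},
            {"AttributeName": "submission_time", "AttributeType": "S"},
        ],
        KeySchema=[{"AttributeName": "submission_id", "KeyType": "HASH"}],
        GlobalSecondaryIndexes=[
            {
                # GSI name is an assumption; any name works.
                "IndexName": "challenge_id-submission_time-index",
                "KeySchema": [
                    {"AttributeName": "challenge_id", "KeyType": "HASH"},
                    {"AttributeName": "submission_time", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
            }
        ],
    )
    dynamodb.get_waiter("table_exists").wait(TableName="ooggi-r-challenge-table-demo")

    # Secondary query pattern: all submissions for a challenge after a given time.
    table = boto3.resource("dynamodb", region_name="eu-west-1").Table("ooggi-r-challenge-table-demo")
    response = table.query(
        IndexName="challenge_id-submission_time-index",
        KeyConditionExpression=Key("challenge_id").eq("challenge-42")
        & Key("submission_time").gt("2022-04-01T12:00:00"),
    )

Since submission_time is stored as a String, range conditions like the one above rely on a lexicographically sortable format such as ISO 8601. To actively reject duplicate writes rather than overwrite, a ConditionExpression of attribute_not_exists(submission_id) can be added to put_item.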

Testing set-up

  • The Test Helper IAM Role stack should be deployed to allow the testing platform to interact with your DynamoDB table.

Artifacts/Resources:

  • A CSV file with the sample DynamoDB items to be loaded into the table:
    dataset.csv
  • To provide the test environment with permissions to interact with your DynamoDB table, the following stack needs to be deployed:
    CloudFormation Template Link
    *The IAM Role ARN (Amazon Resource Name) should later be provided on the solution submission page.
    *The IAM Role ARN can be found in the Outputs section of the deployed CloudFormation stack.

Your mission, if you choose to accept it,
is to deploy a DynamoDB table with the required indexes as a proof of concept.

[AWS] Event Processing with Kinesis Data Streams
https://ooggi-dev.com/challenge-item/130/

Background:

A team supporting an online marketplace platform is looking to upgrade their event ingestion and processing capabilities. Currently various events are written to their main relational database (PostgreSQL).

Lately, a number of challenges came up around this design.

  • With the ever-increasing number of events, it’s getting more difficult to scale the database. It was vertically scaled to a bigger instance a number of times, but that is only a temporary solution. At some point even the biggest machine won’t do.
  • Many teams need to respond to events in near real time. Having many additional applications continuously polling the database for the most recent changes is not practical in this case.
  • As this is the main production database, it’s handled with great care and different requests for integration go through a lengthy approval process by the database team. This frustrates other teams and slows down progress.

One important use case is that of fraud/anomaly detection. The business is in urgent need of reviewing all transactions in near real time and reporting on any that meet specific criteria. This business logic will first be implemented as a set of manually coded rules, but will later be replaced by a machine learning model.

For the above reasons, the team has decided to move forward with the implementation of an event ingestion system that will allow various stream processing applications to consume the events in parallel with sub-second latency.

The service picked for the task is Amazon Kinesis Data Streams, and the first application prototype will be the fraud/anomaly detection system.

Bounty:

  • Points: 30
  • Path: Cloud Engineer

Difficulty:

  • Level: 3
  • Estimated time: 2-12 hours

Deliverables:

  • A system that will provide a Kinesis Data Stream for ingestion and write anomalous events to an output Kinesis Data Stream after applying specific anomaly detection logic

Prototype description:

An Amazon Kinesis Data Stream will be used for ingestion of events.
Events written into the main ingestion stream will be read by the anomaly detection service, which will apply the required logic. Events detected as anomalous will be written to an output Kinesis Data Stream.

An IAM Role with the relevant IAM policy is needed to generate temporary credentials that will allow the testing system to access both the main ingestion Kinesis stream and the output / anomaly Kinesis Data Stream.
The test will write different events into the main ingestion stream and will expect to find only anomalous events in the output / anomaly stream.

Requirements:

  • The prototype shall ingest events via a Kinesis stream and write the events that are detected as anomalous to an output Kinesis stream
  • The main ingestion Kinesis stream name shall start with:

    main-input-stream
  • The output / anomaly Kinesis stream name shall start with:

    anomaly-stream
  • All resources shall be deployed in the Ireland (eu-west-1) region
  • Temporary IAM credentials shall be provided for the test that will allow the following API actions on the relevant Kinesis streams
    Main input stream:

    kinesis:PutRecord
    kinesis:PutRecords
    

    Anomaly/output stream:

    kinesis:GetRecords
    kinesis:GetShardIterator
    kinesis:DescribeStream
    kinesis:ListShards
    kinesis:ListStreams
    
  • The client application will write events (JSON) with the following structure into the main ingestion Kinesis stream:

    {"event_id" : "[Event ID]" , "transaction_amount" : [Transaction amount]}
    

    Example event:

    {"event_id" : "12345678" , "transaction_amount" : 120}
    
  • Events with a transaction_amount value higher than 100, shall be written into the output/anomaly stream
  • No action is required at this time on events with a transaction_amount equal to or lower than 100
  • The original event (with the same structure and JSON data types) is expected in the output/anomaly stream (only relevant events)
  • The maximum time from event ingestion into the main ingestion stream to the relevant event being available in the output/anomaly stream shall be 2 seconds
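
One common way to implement the processing component is a Lambda function with the main ingestion stream as its event source; the sketch below assumes that set-up (stream name suffixes are placeholders), though a KCL consumer or a Kinesis Data Analytics application would also work:

    import base64
    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="eu-west-1")
    OUTPUT_STREAM = "anomaly-stream-demo"   # placeholder; must start with "anomaly-stream"

    def handler(event, context):
        """Triggered by the main ingestion stream; forwards anomalous events only."""
        for record in event["Records"]:
            # Kinesis record payloads arrive base64-encoded in the Lambda event.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            if payload.get("transaction_amount", 0) > 100:
                kinesis.put_record(
                    StreamName=OUTPUT_STREAM,
                    Data=json.dumps(payload),              # original structure preserved
                    PartitionKey=str(payload["event_id"]),
                )
            # Events at or below 100 require no action at this time.

Keeping the event source mapping’s batch window at (or near) zero helps stay within the 2-second end-to-end requirement.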

Your mission, if you choose to accept it,
is to deploy an ingestion service with Kinesis with a custom event processing component and an output Kinesis stream.

[AWS] Decouple Main App from DynamoDB with SQS
https://ooggi-dev.com/challenge-item/122/

Background:

A new Q&A platform is quickly attracting a large audience. It’s working well, but needs some upgrades to keep pace with its growing customer base and incoming feature requests. One component that requires a review is the question submissions feature.

At this time, the Main App is writing newly submitted questions directly to Amazon DynamoDB. Multiple other components are reading data from the DynamoDB table. One component serves the content to website users, another feeds the analytics service, etc.

There are a couple of issues that need to be addressed with the current design.

  • Newly submitted question events now need to be enriched. Additional user content needs to be fetched from a different data store and added to the event. This is beyond the scope of the Main App and needs to be handled externally.
  • In the past, a number of misformatted events ended up in DynamoDB causing issues for downstream components. Additional validation controls need to be applied outside of the Main App.
  • Another concern is unpredictable bursts in question submissions. A buffer that could deal with high spikes in writes and provide a steady request rate to DynamoDB would be beneficial.

The Cloud Architecture team has reviewed a variety of designs (Kinesis Streams/Apache Kafka, Amazon EventBridge and more).
At this time, it was decided that introducing an SQS queue with a lightweight event processing service between the Main App and DynamoDB would strike a good balance between a straightforward, (hopefully) easy-to-implement solution and one that addresses the challenges mentioned above.

Bounty:

  • Points: 30
  • Path: Cloud Engineer

Difficulty:

  • Level: 3
  • Estimated time: 2-12 hours

Deliverables:

  • An SQS queue, a lightweight message processing service and a DynamoDB table, all integrated based on the requirements.

Prototype description:

A standard SQS queue will be used for ingestion of messages. Messages written into the queue will be read by a service that will enrich the events, validate their structure and then write only validated requests into a DynamoDB table.
Basic enrichment and validation rules are provided in the requirements section.
An IAM Role with the relevant IAM policy is required to generate temporary credentials that will allow the testing system to access both the SQS queue and the DynamoDB table.
The test will write messages with specific content to SQS and expect to find the relevant records in DynamoDB.

Requirements:

  • The new service shall read the messages from the SQS queue, enrich the messages by adding an additional field and write each message as an item into a DynamoDB table.
  • The SQS Queue name shall start with:

    challenge-queue
  • The DynamoDB table name shall start with:

    challenge-table
  • All resources shall be deployed in the Ireland (eu-west-1) region
  • Temporary IAM credentials shall be provided for the test that will allow the following API actions on the relevant SQS queue and DynamoDB table in your AWS account.

    sqs:SendMessage
    dynamodb:GetItem
    
  • The client application will send messages with the following structure into the SQS queue:

    {"question_id" : "[Question ID]" , "question_text" : "[Question Text]"}
    

    Example message:

    {"question_id" : "1234567" , "question_text" : "How to generate temporary IAM credentials?"}
    
  • The DynamoDB table shall be configured with a Partition Key only (no Range/Sort Key)
  • The question_id attribute shall be configured as the Partition Key
  • All attributes in the DynamoDB table shall be of type String (“question_id”, “question_text”, “user_type”)
  • The following item structure is expected in DynamoDB:

    {"question_id" : "[Question ID]" , "question_text" : "[Question Text]", "user_type" : "[Random String]"}
    

    Example message:

    {"question_id" : "1234567" , "question_text" : "How to generate temporary IAM credentials?", "user_type" : "A+"}
    
  • The maximum time from message ingestion into the SQS queue to the item being available in the DynamoDB table shall be 2 seconds.
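
A minimal sketch of the processing service as a Lambda function with an SQS event-source mapping. The table name suffix is a placeholder, and the validation/enrichment logic is deliberately simplified to what the prototype needs (the table itself is assumed to exist with question_id as its partition key, per the requirements above):

    import json
    import random
    import string
    import boto3

    TABLE_NAME = "challenge-table-demo"   # placeholder; must start with "challenge-table"
    table = boto3.resource("dynamodb", region_name="eu-west-1").Table(TABLE_NAME)

    def handler(event, context):
        """Triggered by the SQS queue; validates, enriches and writes each message."""
        for record in event["Records"]:
            try:
                message = json.loads(record["body"])
            except json.JSONDecodeError:
                continue  # drop malformed messages instead of passing them downstream
            # Basic structure validation before anything reaches DynamoDB.
            if "question_id" not in message or "question_text" not in message:
                continue
            # Enrichment: in the real service this would come from another data
            # store; a random string satisfies the prototype requirement.
            message["user_type"] = "".join(random.choices(string.ascii_uppercase, k=2))
            table.put_item(Item=message)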

Your mission, if you choose to accept it,
is to deploy an ingestion service with SQS, a custom processing component and DynamoDB.

[AWS] S3 Presigned URLs For The Win
https://ooggi-dev.com/challenge-item/121/

Background:

The Data Analytics team in a marketing firm is hard at work on their new reporting service. It has been a long time in the making, but once finished, will allow users across the organization to access a personal reporting dashboard with a variety of dashboards and reports. These can be displayed in the browser, but are also available for download as PDF files.

One of the features allows users to receive email notifications with links to the relevant PDF reports. The link in the email is valid for 3 days.
After 3 days, the link in the email will not work, but users will still be able to access the report after signing into their personal reporting dashboard.

The first take at this was to copy any generated report PDF files to an S3 bucket. The bucket is set up with a lifecycle policy configured with a deletion action after 3 days. The link to an S3 object (a specific report) would stop working once the object was removed from the S3 bucket by the lifecycle policy. A random suffix is added to object names to avoid naming conflicts between S3 objects.

One issue that was brought up with the above solution is the inability to set different expiry times for the links dynamically. Sometimes there’s a need to set the expiry time to something other than 3 days (based on user, department or report type).
Also, all reports are in any case archived as PDFs in a separate S3 bucket.
It would save space and cost, and reduce complexity, if the same objects were used and only the links to the objects shared with users expired (the object will not be deleted or affected once the link has expired).

Bounty:

  • Points: 10
  • Path: Cloud Engineer

Difficulty:

  • Level: 1
  • Estimated time: 1 hour

Deliverables:

  • An S3 set-up supporting presigned S3 URLs pointing to an S3 object as described below

Prototype description:

The S3 presigned URL feature would be very useful in this scenario.
The team would like to see how the links can be generated and how expiry time can be set at creation time. They are also interested in the behaviour of expired links and the resulting error type and message.
The S3 presigned URL should provide only temporary access. The S3 object should not be made publicly available.
The proof of concept will consist of a single accessible (active) presigned URL and another URL that has expired.

Requirements:

An active S3 presigned URL with the following properties:

  • The URL shall have an expiration time of 3 days (259200 seconds)
  • The presigned URL shall enable access to an S3 object with the following content:

    {
    "Report ID" : "12345",
    "Report": "Looking good"
    }
    
  • The object shall be a simple text file (not a PDF file)
  • The S3 object shall not be publicly available (It should only be accessible via the presigned URL)

An expired S3 presigned URL with the following properties:

  • The presigned URL provided shall respond with an HTTP 403 Forbidden message (The default behaviour for an expired presigned URL)
  • The S3 object shall not be publicly available
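
A minimal boto3 sketch of how the two URLs could be produced. The bucket and object key names are placeholders, and the bucket is assumed to already exist with Block Public Access left on:

    import json
    import boto3

    s3 = boto3.client("s3", region_name="eu-west-1")
    BUCKET = "my-private-reports-bucket"   # placeholder; keep Block Public Access enabled
    KEY = "report-12345.txt"               # placeholder object key

    # Upload the report content as a plain text object (not publicly readable).
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=json.dumps({"Report ID": "12345", "Report": "Looking good"}),
        ContentType="text/plain",
    )

    # Active link: valid for 3 days (259200 seconds).
    active_url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": KEY},
        ExpiresIn=259200,
    )

    # "Expired" link for the proof of concept: generate one with a very short
    # expiry and let it lapse; requests then return HTTP 403 (Forbidden).
    expired_url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": KEY},
        ExpiresIn=1,
    )
    print(active_url, expired_url, sep="\n")

Note that a presigned URL is also bound to the credentials that signed it; signing with long-lived IAM user credentials (SigV4 allows up to 7 days) avoids the link dying early when temporary credentials expire.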

Your mission, if you choose to accept it,
is to configure the S3 environment and generate the required URLs.

[AWS] S3 Static Website Needed
https://ooggi-dev.com/challenge-item/120/

Background:

The IT team is looking to improve the customer experience of users of the company website. Currently, the company website is hosted on an N-tier stack with an Application Load Balancer, App/Web servers and an RDS database.

Generally the system has proved reliable, but there have been a number of occasions where the website would simply not load. Users received various generic messages, such as a “This site can’t be reached” error from the browser or other default errors from the Application Load Balancer.

On one occasion the production CloudFormation stack for the website was terminated by mistake. A System Administrator wanted to terminate a Dev stack in the Dev account, but forgot that he was logged in to the Prod account in another browser tab/profile :/

Another incident was when the CloudFormation stack was updated but a misconfiguration was introduced to the Application Load Balancer Listener Configuration. This led to the load balancer not working and the website not responding.

There is a lot to be done to improve various processes in order to reduce the likelihood of the system failing, but it’s also important that if things do go wrong, the website users get a nice looking, user friendly error page with the same consistent website design/theme.

The team would like to put in place a DNS failover set-up with Route53 that would redirect users to a separate static error page if the main environment is unhealthy. This static website will be hosted in a separate AWS region to cover an unlikely case of regional failure.

Bounty:

  • Points: 10
  • Path: Cloud Engineer

Difficulty:

  • Level: 1
  • Estimated time: 1 hour

Deliverables:

  • A publicly available S3 Static Website with the relevant content

Prototype description:

At this time, a decision was made to use a publicly available S3 Static Website and host the relevant user-friendly error HTML page there.

Later, a Route53 Failover policy will be configured with a health check so that if the main website environment is not responsive (doesn’t pass the Route53 health check) an Alias record will be used to point to the S3 Static Website endpoint.
The S3 bucket configuration must accommodate the above Route53 failover strategy.

It’s understood that the S3 Static Website feature does not support HTTPS and that this set-up will only work for HTTP. At a later time, CloudFront with a custom domain / SSL certificates setup will be considered on top of the static S3 website.

Requirements:

  • The S3 bucket shall be created in the Ireland (eu-west-1) region.
  • The S3 bucket name shall comply with the following structure:

    www.[your-random-string].com

    *This is to support the Alias record option as part of the Route53 failover policy described above. More information is available in the AWS documentation.
    For example, a valid bucket name can be:

    www.my-random-string.com
  • The S3 Static Website will be available on the following URL:

    http://www.[your-random-string].com.s3-website-eu-west-1.amazonaws.com
  • HTTPS support is not required (Only HTTP access will be checked).
  • The S3 Static Website shall return the following HTML page when accessed on the base path:

    <!doctype html>
    <html>
      <head>
        <title>Title</title>
      </head>
      <body>
        <h1>Something went wrong :/</h1>
        <p>Please try again in a bit.</p>
      </body>
    </html>
    

    For example, when one browses to:

    http://www.[your-random-string].com.s3-website-eu-west-1.amazonaws.com

    They’ll get a basic HTML page with the following content:

    Something went wrong :/
    Please try again in a bit.
    
  • If the website is accessed on any other path, the same HTML page will be returned, with the relevant HTTP response code.
    For example, the following requests will return the same HTML page as described above with a 404 HTTP (NotFound) response code:

    http://www.[your-random-string].com.s3-website-eu-west-1.amazonaws.com/hi
    http://www.[your-random-string].com.s3-website-eu-west-1.amazonaws.com/foo.html
    
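A minimal boto3 sketch of the set-up (the bucket name is a placeholder following the required structure). Serving the same page as both the index document and the error document is what returns it on the base path with a 200 and on any other path with a 404:

    import json
    import boto3

    BUCKET = "www.my-random-string.com"   # placeholder; must follow the naming rule above
    s3 = boto3.client("s3", region_name="eu-west-1")

    ERROR_PAGE = """<!doctype html>
    <html>
      <head><title>Title</title></head>
      <body>
        <h1>Something went wrong :/</h1>
        <p>Please try again in a bit.</p>
      </body>
    </html>"""

    s3.create_bucket(
        Bucket=BUCKET,
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )
    # Static website hosting needs public reads, so Block Public Access must be relaxed.
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": False, "IgnorePublicAcls": False,
            "BlockPublicPolicy": False, "RestrictPublicBuckets": False,
        },
    )
    s3.put_bucket_policy(
        Bucket=BUCKET,
        Policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{BUCKET}/*",
            }],
        }),
    )
    # Serve the same page on the base path (index) and on any unknown path (error, 404).
    s3.put_object(Bucket=BUCKET, Key="index.html", Body=ERROR_PAGE, ContentType="text/html")
    s3.put_object(Bucket=BUCKET, Key="error.html", Body=ERROR_PAGE, ContentType="text/html")
    s3.put_bucket_website(
        Bucket=BUCKET,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )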

Your mission, if you choose to accept it,
is to set up a static S3 website with the required configuration.

[AWS] IAM Set Up for an Automated Security Audit
https://ooggi-dev.com/challenge-item/115/

Background:

A large enterprise company in the pharmaceutical sector has been using AWS for some time now for a number of key workloads.
Special care is taken around the security of the environment due to the high stakes around the manufacturing process, the intellectual property held by the company, private user information (especially around experiments and protected healthcare information), etc. As more teams create AWS accounts and more workloads are deployed, there’s a concern that a potential misconfiguration will lead to a compromise.

Last week, a development EC2 instance was compromised. An older, unused Security Group was attached to the EC2 instance by mistake, with a configuration that allowed traffic from all IPs on all ports to reach the instance.

While a broader set of tools is implemented to prevent this from happening, there’s an urgent need to inspect all AWS accounts in the organization for the presence of such Security Groups.
An ad-hoc script was put together, but there’s still a need to set up authentication and authorization in a way that allows the “Security Account” (the account in which the tool will be running) to access project/team AWS accounts and call the relevant EC2 APIs for inspection.

Bounty:

  • Points: 20
  • Path: Cloud Engineer

Difficulty:

  • Level: 2
  • Estimated time: 1-2 hours

Deliverables:

  • A set of temporary credentials (Access Key, Secret Key, Token) that will allow the inspection of a remote AWS account

Prototype description:

IAM Roles and policies should be configured with the principle of least privilege in mind. Once these are in place, temporary AWS credentials should be created to allow the caller to describe the EC2 Security Groups across the account.
The credentials will be passed on securely to the team and the test will be executed.
If all works well, this setup will be replicated to other AWS accounts in the organization to facilitate a broader inspection.

Requirements:

  • A set of active temporary credentials shall be provided which include an AWS Access Key, Secret Key and Session Token. For Example:
    Access Key ID:

    AKIAIOSFODNN7EXAMPLE

    Secret Access Key:

    wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

    Session Token:

    AQoDYXdzEPT//////////wEXAMPLEtc764bNrC9SAPBSM22wDOk4x4HIZ8j4FZTwdQWLWsKWHGBuFqwAeMicRXmxfpSPfIeoIYRqTflfKD8YUuwthAx7mSEI/qkPpKPi/kMcGdQrmGdeehM4IC1NtBmUpp2wUE8phUZampKsburEDy0KPkyQDYwT7WZ0wq5VSXDvp75YU9HFvlRd8Tx6q6fE8YQcHNVXAkiY9q6d+xo0rKwT38xVqr7ZD0u0iPPkUL64lIZbqBAz+scqKmlzm8FDrypNC9Yjc8fPOLn9FX9KSYvKTr4rvx3iSIlTJabIQwj2ICCR/oLxBA==
  • The temporary credentials shall not be valid for longer than 30 minutes.
  • The provided credentials shall grant a calling application the following permission:

    ec2:DescribeSecurityGroups
  • An API call to describe Security Groups in the Ireland (eu-west-1) region shall uncover at least one Security Group with a name prefixed by:

    ooggi-sg-test

    For example:

    ooggi-sg-test-my-sg
  • The Security Group can be created in any VPC as long as the region is the Ireland (eu-west-1) region.
  • Any other API call using the provided temporary credentials (besides ec2:DescribeSecurityGroups) shall fail with the appropriate error response from the AWS API.
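
A minimal boto3 sketch of how the temporary credentials could be produced, assuming an audit role already exists whose only attached permission is ec2:DescribeSecurityGroups and whose trust policy allows the caller to assume it (the role ARN and session name are placeholders):

    import boto3

    sts = boto3.client("sts")

    # Role ARN is a placeholder; the role's only permission should be
    # ec2:DescribeSecurityGroups so any other API call fails with AccessDenied.
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/security-audit-role",
        RoleSessionName="sg-audit",
        DurationSeconds=1800,               # 30 minutes, the maximum allowed here
    )
    creds = response["Credentials"]

    # Verify the credentials work for the intended call in eu-west-1.
    ec2 = boto3.client(
        "ec2",
        region_name="eu-west-1",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    groups = ec2.describe_security_groups(
        Filters=[{"Name": "group-name", "Values": ["ooggi-sg-test*"]}]
    )
    print([g["GroupName"] for g in groups["SecurityGroups"]])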

Your mission, if you choose to accept it,
is to set up AWS IAM to allow a remote application to inspect the Security Group configuration of an AWS account.

[AWS] A New Home with CloudFront Request Redirection
https://ooggi-dev.com/challenge-item/114/

Background:

A travel company has a set of APIs they expose to other companies.
Information about open reservations and trip schedules can be stored and fetched easily by their B2B customers. The current solution is deployed as a 3-tier web app behind an Application Load Balancer with CloudFront “on top”.
Although this solution has served them well, the variable load on the service is such that significant cost savings could be achieved by moving to a Serverless stack.
Better performance and lower operational overhead are also of interest.

A team was put together to design the new-generation API, which will use AWS API Gateway (edge-optimized) with Lambda. The main database will remain DynamoDB, as with the current solution.

A variety of HTTP clients across the world rely on the existing API that is exposed via a default Amazon CloudFront DNS name (xxxxxxxxxxxxxx.cloudfront.net).
Many clients don’t have an easy way to update the DNS endpoint. A change will require careful planning and coordination.

A decision was made to move forward and to migrate some of the more “problematic” APIs (those with a spiky traffic pattern, for which the new stack will provide the biggest advantage) in a seamless way.
Later, more functionality will be moved to the new solution, until all of it is available from a new DNS name resolving to a new API Gateway endpoint.
At that stage, the HTTP clients will be updated to use the new endpoint exclusively and the old environment will be decommissioned (after a grace period).

One low-hanging fruit identified is the following API:

https://xxxxxxxxxxxxxx.cloudfront.net/reservations

More specifically, the HTTP GET method associated with the above path/resource, which is by far the most common request.

Bounty:

  • Points: 30
  • Path: Cloud Engineer

Difficulty:

  • Level: 3
  • Estimated time: 1-12 hours

Deliverables:

  • A live CloudFront deployment with a redirect feature (implemented with CloudFront only or with a custom Origin)

Prototype description:

The team has agreed that a good solution at this time would be to set up an HTTP redirect (301 response code) for the GET method on the resource/path mentioned earlier (https://xxxxxxxxxxxxxx.cloudfront.net/reservations).

The existing 3-tier web application stack behind the ALB can’t be updated or changed.
The redirect needs to be implemented before the request reaches the web/application servers themselves.

A prototype should be created with Amazon CloudFront, simulating the existing environment, to demonstrate this capability. When a specific path is accessed with a GET request, the HTTP client should be redirected to a new domain name as described in the requirements below.
Solutions that will minimize the latency added by the redirect are preferred (A redirect as close as possible to the HTTP client).

Requirements:

  • The prototype shall be deployed using the AWS CloudFront service.
    The URL provided for the redirection test must have the following structure:

    https://[distribution-id].cloudfront.net
  • A GET request to the following path:

    https://[distribution-id].cloudfront.net/reservations

    Shall result in an HTTP 301 Redirect response to the following location:

    https://ooggi.com/the_blog/
  • A GET request to a different path shall not result in an HTTP 301 Redirect response. For example:

    https://[distribution-id].cloudfront.net/
    https://[distribution-id].cloudfront.net/cases
  • Any other HTTP method (POST, DELETE, etc.) to any path shall not result in an HTTP 301 Redirect response.
  • The prototype shall be deployed with a valid HTTPS certificate and accessible with HTTPS.
  • The prototype can be deployed as a public API. There’s no need to implement authentication/authorization on Amazon CloudFront.
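
One way to perform the redirect before requests ever reach the ALB or the web servers is a viewer-request function at the edge. The sketch below assumes Lambda@Edge in Python; a CloudFront Function (JavaScript) attached to the viewer-request event would achieve the same with even lower latency:

    # Lambda@Edge viewer-request handler (deployed in us-east-1 and associated
    # with the distribution's default cache behaviour).
    def handler(event, context):
        request = event["Records"][0]["cf"]["request"]

        # Redirect only GET /reservations; everything else passes through untouched.
        if request["method"] == "GET" and request["uri"] == "/reservations":
            return {
                "status": "301",
                "statusDescription": "Moved Permanently",
                "headers": {
                    "location": [{"key": "Location", "value": "https://ooggi.com/the_blog/"}]
                },
            }
        return request

The default *.cloudfront.net certificate already satisfies the HTTPS requirement, so no custom certificate is needed for the prototype.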

Your mission, if you choose to accept it,
is to create a prototype with Amazon CloudFront based on the requirements and share the URL with the customer.

[AWS] API Prototype with AWS API Gateway
https://ooggi-dev.com/challenge-item/103/

Background:

The customer is a small software company that specializes in the communication sector.
They focus on custom built applications for call traffic monitoring and their solutions are almost always deployed in the client’s on-site/cloud environments.
To make some specific capabilities more accessible, and in order to move to a more on-demand pricing model, they are exploring the use of HTTP APIs in some areas.
Instead of inline monitoring, these capabilities will provide auxiliary functions and will be easy to integrate into both their own future applications and those of other customers who will want to leverage the APIs directly.
Although some stakeholders are excited, others have raised concerns about security and specifically authentication mechanisms for service to service scenarios.
There’s also no clear vision on how to implement monetization and metering of usage.

Bounty:

  • Points: 20
  • Path: Cloud Engineer

Difficulty:

  • Level: 2
  • Estimated time: 1-6 hours

Deliverables:

  • A live AWS API Gateway deployment set-up based on the requirements

Prototype description:

As a first step the customer would like to see a working prototype with an authentication feature.
The team that will own the final project is familiar with AWS and manages some AWS infrastructure, such as their EC2 based R&D and Dev/Test environments.
They plan to design and deploy the future application infrastructure needed for the API’s in AWS and have a clear strategy around leveraging managed cloud services when possible.
For the above reason, the request is to deploy the first prototype by leveraging the AWS API Gateway service.

Requirements:

  • The prototype shall be deployed using the Amazon API Gateway solution.
  • The API DNS Name shall have the following structure:

    *.execute-api.[region_name].amazonaws.com

    example:

    https://xxxxxxxxxx.execute-api.[region_name].amazonaws.com/test
  • The API shall be deployed in the closest proximity to the company’s current client base in Sydney, Australia, in order to reduce latency as much as possible.
  • The API shall support authentication by using a key that is specified in the request header.
    API Key to configure as allowed access for the test:

    YpazbHRJmWuaWd7p5y2d

    Request header that will be used in the test:

    x-api-key: "YpazbHRJmWuaWd7p5y2d"
  • A GET request with the above header and key to the following path:

    [API Gateway deployment path]/test

    should result in a 200 HTTP response code with an application/json response type with the following content:

    {"test" : "complete"}
  • Unauthenticated requests or those with the wrong API key to the above path should result in a response with a 403 HTTP code.
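
A minimal boto3 sketch of one way to stand up the prototype in the Sydney (ap-southeast-2) region, using a mock integration so no backend is required. The API, stage and usage plan names are placeholders; note that with a REST API the stage name becomes part of the deployment path:

    import boto3

    apigw = boto3.client("apigateway", region_name="ap-southeast-2")  # closest to Sydney

    api = apigw.create_rest_api(name="comm-api-prototype", apiKeySource="HEADER")
    api_id = api["id"]
    root_id = next(r["id"] for r in apigw.get_resources(restApiId=api_id)["items"]
                   if r["path"] == "/")

    # /test resource with a GET method that requires an API key (x-api-key header).
    test = apigw.create_resource(restApiId=api_id, parentId=root_id, pathPart="test")
    apigw.put_method(restApiId=api_id, resourceId=test["id"], httpMethod="GET",
                     authorizationType="NONE", apiKeyRequired=True)

    # Mock integration that always returns {"test" : "complete"} as application/json.
    apigw.put_integration(restApiId=api_id, resourceId=test["id"], httpMethod="GET",
                          type="MOCK",
                          requestTemplates={"application/json": '{"statusCode": 200}'})
    apigw.put_method_response(restApiId=api_id, resourceId=test["id"], httpMethod="GET",
                              statusCode="200")
    apigw.put_integration_response(restApiId=api_id, resourceId=test["id"], httpMethod="GET",
                                   statusCode="200",
                                   responseTemplates={"application/json": '{"test" : "complete"}'})

    # Deploy, then attach the required key through a usage plan.
    apigw.create_deployment(restApiId=api_id, stageName="v1")
    plan = apigw.create_usage_plan(name="prototype-plan",
                                   apiStages=[{"apiId": api_id, "stage": "v1"}])
    key = apigw.create_api_key(name="prototype-key", enabled=True,
                               value="YpazbHRJmWuaWd7p5y2d")
    apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")

    print(f"https://{api_id}.execute-api.ap-southeast-2.amazonaws.com/v1/test")

With apiKeyRequired set on the method, requests without the x-api-key header, or with a wrong key, are rejected by API Gateway with a 403 before reaching the integration.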

Your mission, if you choose to accept it,
is to create a prototype AWS API Gateway deployment based on the requirements and share the URL with the customer.
