Skeleton-based pose annotation labeling using Amazon SageMaker Ground Truth


Pose estimation is a computer vision technique that detects a set of points on objects (such as people or vehicles) within images or videos. Pose estimation has real-world applications in sports, robotics, security, augmented reality, media and entertainment, medical applications, and more. Pose estimation models are trained on images or videos that are annotated with a consistent set of points (coordinates) defined by a rig. To train accurate pose estimation models, you first need to acquire a large dataset of annotated images; many datasets have tens or hundreds of thousands of annotated images and take significant resources to build. Labeling mistakes are important to identify and prevent because model performance for pose estimation models is heavily influenced by labeled data quality and data volume.


In this post, we show how you can use a custom labeling workflow in Amazon SageMaker Ground Truth specifically designed for keypoint labeling. This custom workflow helps streamline the labeling process and minimize labeling errors, thereby reducing the cost of obtaining high-quality pose labels.

Importance of high-quality data and reducing labeling errors

High-quality data is fundamental for training robust and reliable pose estimation models. The accuracy of these models is directly tied to the correctness and precision of the labels assigned to each pose keypoint, which, in turn, depends on the effectiveness of the annotation process. Additionally, having a substantial volume of diverse and well-annotated data ensures that the model can learn a broad range of poses, variations, and scenarios, leading to improved generalization and performance across different real-world applications. The acquisition of these large, annotated datasets involves human annotators who carefully label images with pose information. While labeling points of interest within the image, it's useful to see the skeletal structure of the object in order to provide visual guidance to the annotator. This helps identify labeling errors, such as left-right swaps or mislabels (for example, marking a foot as a shoulder), before they are incorporated into the dataset. For example, a labeling error like the left-right swap made in the following example can easily be identified by the crossing of the skeleton rig lines and the mismatching of the colors. These visual cues help labelers recognize mistakes and result in a cleaner set of labels.

Due to the manual nature of labeling, obtaining large and accurate labeled datasets can be cost-prohibitive, and even more so with an inefficient labeling system. Therefore, labeling efficiency and accuracy are critical when designing your labeling workflow. In this post, we demonstrate how to use a custom SageMaker Ground Truth labeling workflow to quickly and accurately annotate images, reducing the burden of developing large datasets for pose estimation workflows.

Overview of solution

This solution provides an online web portal where the labeling workforce can use a web browser to log in, access labeling jobs, and annotate images using the crowd-2d-skeleton user interface (UI), a custom UI designed for keypoint and pose labeling with SageMaker Ground Truth. The annotations or labels created by the labeling workforce are then exported to an Amazon Simple Storage Service (Amazon S3) bucket, where they can be used for downstream processes like training deep learning computer vision models. This solution walks you through how to set up and deploy the necessary components to create a web portal as well as how to create labeling jobs for this labeling workflow.

The following is a diagram of the overall architecture.

This architecture is comprised of several key components, each of which we explain in more detail in the following sections. The architecture provides the labeling workforce with an online web portal hosted by SageMaker Ground Truth. This portal allows each labeler to log in and see their labeling jobs. After they've logged in, the labeler can select a labeling job and begin annotating images using the custom UI hosted by Amazon CloudFront. We use AWS Lambda functions for pre-annotation and post-annotation data processing.

The following screenshot is an example of the UI.

The labeler can mark specific keypoints on the image using the UI. The lines between keypoints are automatically drawn for the user based on a skeleton rig definition that the UI uses. The UI allows many customizations, such as the following:

  • Custom keypoint names
  • Configurable keypoint colors
  • Configurable rig line colors
  • Configurable skeleton and rig structures

Each of these is a targeted feature to improve the ease and flexibility of labeling. Specific UI customization details can be found in the GitHub repo and are summarized later in this post. Note that in this post, we use human pose estimation as a baseline task, but you can expand it to labeling object pose with a pre-defined rig for other objects as well, such as animals or vehicles. In the following example, we show how this can be applied to label the points of a box truck.

SageMaker Ground Truth

In this solution, we use SageMaker Ground Truth to provide the labeling workforce with an online portal and a way to manage labeling jobs. This post assumes that you're familiar with SageMaker Ground Truth. For more information, refer to Amazon SageMaker Ground Truth.

CloudFront distribution

For this solution, the labeling UI requires a custom-built JavaScript component called the crowd-2d-skeleton component. This component can be found on GitHub as part of Amazon's open source initiatives. The CloudFront distribution is used to host crowd-2d-skeleton.js, which is needed by the SageMaker Ground Truth UI. The CloudFront distribution is assigned an origin access identity, which allows the CloudFront distribution to access the crowd-2d-skeleton.js residing in the S3 bucket. The S3 bucket remains private, and no other objects in this bucket are available via the CloudFront distribution because of restrictions we place on the origin access identity through a bucket policy. This is a recommended practice for following the least-privilege principle.
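
The repository's AWS CDK stack creates these resources for you; the following is a minimal CDK (Python) sketch of the pattern only, with illustrative construct names and asset prefix rather than the stack's actual definitions:

# Minimal sketch (CDK v2): private bucket + origin access identity + CloudFront distribution.
# Construct IDs and the "infrastructure/*" prefix are illustrative assumptions.
from aws_cdk import Stack, aws_s3 as s3, aws_cloudfront as cloudfront, aws_cloudfront_origins as origins
from constructs import Construct

class LabelingUiStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Private bucket that holds the UI assets and other labeling artifacts
        bucket = s3.Bucket(self, "LabelingBucket",
                           block_public_access=s3.BlockPublicAccess.BLOCK_ALL)

        # Origin access identity CloudFront uses to read from the private bucket
        oai = cloudfront.OriginAccessIdentity(self, "UiOai")

        # Grant read access only to the UI asset prefix, keeping everything else private
        bucket.grant_read(oai, "infrastructure/*")

        # Distribution that serves crowd-2d-skeleton.js from the private bucket
        cloudfront.Distribution(self, "UiDistribution",
            default_behavior=cloudfront.BehaviorOptions(
                origin=origins.S3Origin(bucket, origin_access_identity=oai)
            )
        )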

Amazon S3 bucket

We use the S3 bucket to store the SageMaker Ground Truth input and output manifest files, the custom UI template, images for the labeling jobs, and the JavaScript code needed for the custom UI. This bucket is private and not accessible to the public. The bucket also has a bucket policy that restricts the CloudFront distribution to only being able to access the JavaScript code needed for the UI. This prevents the CloudFront distribution from serving any other objects in the S3 bucket.

Pre-annotation Lambda function

SageMaker Ground Truth labeling jobs typically use an input manifest file, which is in JSON Lines format. This input manifest file contains metadata for a labeling job, acts as a reference to the data that needs to be labeled, and helps configure how the data should be presented to the annotators. The pre-annotation Lambda function processes items from the input manifest file before the manifest data is input to the custom UI template. This is where any formatting or special modifications to the items can be done before presenting the data to the annotators in the UI. For more information on pre-annotation Lambda functions, see Pre-annotation Lambda.

Post-annotation Lambda function

Similar to the pre-annotation Lambda function, the post-annotation function handles additional data processing you may want to do after all the labelers have finished labeling but before the final annotation output results are written. This processing is done by a Lambda function, which is responsible for formatting the data for the labeling job output results. In this solution, we simply use it to return the data in our desired output format. For more information on post-annotation Lambda functions, see Post-annotation Lambda.
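
For reference, the following is a simplified sketch of what a Ground Truth post-annotation (consolidation) function can look like, assuming the standard event and payload shapes Ground Truth passes to post-annotation Lambdas and a single worker per data object; the function in the repository may format its output differently:

# Sketch of a post-annotation Lambda. Assumes the documented Ground Truth consolidation
# request format and one worker per data object; not the repository's actual code.
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    label_attribute_name = event["labelAttributeName"]

    # Ground Truth stores the raw worker annotations in S3 and passes their location in the event
    payload_uri = event["payload"]["s3Uri"]
    bucket, key = payload_uri.replace("s3://", "").split("/", 1)
    payload = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())

    consolidated = []
    for dataset_object in payload:
        # With one worker per data object, take the first (only) worker annotation
        annotation = dataset_object["annotations"][0]
        worker_annotation = json.loads(annotation["annotationData"]["content"])

        consolidated.append({
            "datasetObjectId": dataset_object["datasetObjectId"],
            "consolidatedAnnotation": {
                "content": {label_attribute_name: worker_annotation}
            },
        })
    return consolidated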

Post-annotation Lambda function role

We use an AWS Identity and Access Management (IAM) role to give the post-annotation Lambda function access to the S3 bucket. This is needed to read the annotation results and make any modifications before writing out the final results to the output manifest file.

SageMaker Ground Truth role

We use this IAM role to give the SageMaker Ground Truth labeling job the ability to invoke the Lambda functions and to read the images, manifest files, and custom UI template in the S3 bucket.

Prerequisites

For this walkthrough, you should have the following prerequisites:

For this solution, we use the AWS CDK to deploy the architecture. Then we create a sample labeling job, use the annotation portal to label the images in the labeling job, and examine the labeling results.

Create the AWS CDK stack

After you complete all of the prerequisites, you're ready to deploy the solution.

Set up your resources

Complete the following steps to set up your resources:

  1. Download the example stack from the GitHub repo.
  2. Use the cd command to change into the repository.
  3. Create your Python environment and install the required packages (see the repository README.md for more details).
  4. With your Python environment activated, run the following command:
  5. Run the following command to deploy the AWS CDK stack (see the note after this list):
  6. Run the following command to run the post-deployment script:
    python scripts/post_deployment_script.py
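
The exact commands for steps 4 and 5 are listed in the repository README.md; for a typical AWS CDK project they correspond to bootstrapping and deploying the CDK app, along the lines of the following (confirm against the README before running):

    cdk bootstrap   # only needed once per account and Region
    cdk deploy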

Create a labeling job

After you have set up your resources, you're ready to create a labeling job. For the purposes of this post, we create a labeling job using the example scripts and images provided in the repository.

  1. Use the cd command to change into the scripts directory in the repository.
  2. Download the example images by running the following code:
    python scripts/download_example_images.py

This script downloads a set of 10 images, which we use in our example labeling job. We review how to use your own custom input data later in this post.

  3. Create a labeling job by running the following code:
    python scripts/create_example_labeling_job.py <Labeling Workforce ARN>

This script takes a SageMaker Ground Truth private workforce ARN as an argument, which should be the ARN for a workforce in the same account you deployed this architecture into. The script creates the input manifest file for our labeling job, uploads it to Amazon S3, and creates a SageMaker Ground Truth custom labeling job. We take a deeper dive into the details of this script later in this post.

Label the dataset

After you have launched the example labeling job, it will appear on the SageMaker console as well as in the workforce portal.

In the workforce portal, select the labeling job and choose Start working.

You'll be presented with an image from the example dataset. At this point, you can use the custom crowd-2d-skeleton UI to annotate the images. You can familiarize yourself with the crowd-2d-skeleton UI by referring to User Interface Overview. We use the rig definition from the COCO keypoint detection dataset challenge as the human pose rig. To reiterate, you can customize this rig, removing or adding points based on your requirements, without changing the custom UI component.

When you're finished annotating an image, choose Submit. This takes you to the next image in the dataset until all images are labeled.

Access the labeling results

When you have finished labeling all the images in the labeling job, SageMaker Ground Truth invokes the post-annotation Lambda function and produces an output.manifest file containing all of the annotations. This output.manifest is stored in the S3 bucket. In our case, the location of the output manifest should follow the S3 URI path s3://<bucket name>/labeling_jobs/output/<labeling job name>/manifests/output/output.manifest. The output.manifest file is a JSON Lines file, where each line corresponds to a single image and its annotations from the labeling workforce. Each JSON Lines item is a JSON object with many fields. The field we are interested in is called label-results. The value of this field is an object containing the following fields:

  • dataset_object_id – The ID or index of the input manifest item
  • data_object_s3_uri – The image's Amazon S3 URI
  • image_file_name – The image's file name
  • image_s3_location – The image's Amazon S3 URL
  • original_annotations – The original annotations (only set and used if you're using a pre-annotation workflow)
  • updated_annotations – The annotations for the image
  • worker_id – The workforce worker who made the annotations
  • no_changes_needed – Whether the no changes needed check box was selected
  • was_modified – Whether the annotation data differs from the original input data
  • total_time_in_seconds – The time it took the workforce worker to annotate the image

With these fields, you can access your annotation results for each image and do calculations like average time to label an image.
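
For example, the following short sketch (not part of the repository) reads a locally downloaded copy of output.manifest and uses these fields to compute the average labeling time per image:

# Sketch: compute the average labeling time from output.manifest (downloaded locally)
import json

times = []
with open("output.manifest") as f:
    for line in f:
        results = json.loads(line)["label-results"]
        times.append(float(results["total_time_in_seconds"]))

print(f"Labeled images: {len(times)}")
print(f"Average time to label an image: {sum(times) / len(times):.1f} seconds")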

Create your own labeling jobs

Now that we have created an example labeling job and you understand the overall process, we walk you through the code responsible for creating the manifest file and launching the labeling job. We focus on the key parts of the script that you may want to modify to launch your own labeling jobs.

We cover snippets of code from the create_example_labeling_job.py script located in the GitHub repository. The script starts by setting up variables that are used later in the script. Some of the variables are hard-coded for simplicity, whereas others, which are stack dependent, are imported dynamically at runtime by fetching the values created from our AWS CDK stack.

# Set up / get variable values from our CDK stack
s3_upload_prefix = "labeling_jobs"
image_dir = "scripts/images"
manifest_file_name = "example_manifest.txt"
s3_bucket_name = read_ssm_parameter('/crowd_2d_skeleton_example_stack/bucket_name')
pre_annotation_lambda_arn = read_ssm_parameter('/crowd_2d_skeleton_example_stack/pre_annotation_lambda_arn')
post_annotation_lambda_arn = read_ssm_parameter('/crowd_2d_skeleton_example_stack/post_annotation_lambda_arn')
ground_truth_role_arn = read_ssm_parameter('/crowd_2d_skeleton_example_stack/sagemaker_ground_truth_role')
ui_template_s3_uri = f"s3://{s3_bucket_name}/infrastructure/ground_truth_templates/crowd_2d_skeleton_template.html"
s3_image_upload_prefix = f'{s3_upload_prefix}/images'
s3_manifest_upload_prefix = f'{s3_upload_prefix}/manifests'
s3_output_prefix = f'{s3_upload_prefix}/output'

The first key section in this script is the creation of the manifest file. Recall that the manifest file is a JSON Lines file that contains the details for a SageMaker Ground Truth labeling job. Each JSON Lines object represents one item (for example, an image) that needs to be labeled. For this workflow, the object should contain the following fields:

  • source-ref – The Amazon S3 URI to the image you want to label.
  • annotations – A list of annotation objects, which is used for pre-annotating workflows. See the crowd-2d-skeleton documentation for more details on the expected values.

The script creates a manifest line for each image in the image directory using the following section of code:

# For each image in the image directory, create a manifest line
manifest_items = []
for filename in os.listdir(image_dir):
    if filename.endswith('.jpg') or filename.endswith('.png'):
        img_path = os.path.join(
            image_dir,
            filename
        )
        object_name = os.path.join(
            s3_image_upload_prefix,
            filename
        ).replace("\\", "/")

        # Upload the image to the S3 bucket
        s3_client.upload_file(img_path, s3_bucket_name, object_name)

        # Add it to the manifest file
        manifest_items.append({
            "source-ref": f's3://{s3_bucket_name}/{object_name}',
            "annotations": [],
        })

If you want to use different images or point to a different image directory, you can modify that section of the code. Additionally, if you're using a pre-annotation workflow, you can populate the annotations array with the annotation objects for each image. The details of the format of this array are documented in the crowd-2d-skeleton documentation.

With the manifest line items now created, you can create and upload the manifest file to the S3 bucket you created earlier:

# Create the manifest file
manifest_file_contents = "\n".join([json.dumps(mi) for mi in manifest_items])
with open(manifest_file_name, "w") as file_handle:
    file_handle.write(manifest_file_contents)

# Upload the manifest file
object_name = os.path.join(
    s3_manifest_upload_prefix,
    manifest_file_name
).replace("\\", "/")
s3_client.upload_file(manifest_file_name, s3_bucket_name, object_name)

Now that you have created a manifest file containing the images you want to label, you can create a labeling job. You can create the labeling job programmatically using the AWS SDK for Python (Boto3). The code to create a labeling job is as follows:

# Create the labeling job
client = boto3.client("sagemaker")
now = int(round(datetime.now().timestamp()))
response = client.create_labeling_job(
    LabelingJobName=f"crowd-2d-skeleton-example-{now}",
    LabelAttributeName="label-results",
    InputConfig={
        "DataSource": {
            "S3DataSource": {"ManifestS3Uri": f's3://{s3_bucket_name}/{object_name}'},
        },
        "DataAttributes": {},
    },
    OutputConfig={
        "S3OutputPath": f"s3://{s3_bucket_name}/{s3_output_prefix}/",
    },
    RoleArn=ground_truth_role_arn,
    HumanTaskConfig={
        "WorkteamArn": workteam_arn,
        "UiConfig": {"UiTemplateS3Uri": ui_template_s3_uri},
        "PreHumanTaskLambdaArn": pre_annotation_lambda_arn,
        "TaskKeywords": ["example"],
        "TaskTitle": f"Crowd 2D Part Instance {now}",
        "TaskDescription": "Crowd 2D Part Instance",
        "NumberOfHumanWorkersPerDataObject": 1,
        "TaskTimeLimitInSeconds": 28800,
        "TaskAvailabilityLifetimeInSeconds": 2592000,
        "MaxConcurrentTaskCount": 123,
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": post_annotation_lambda_arn
        },
    },
)
print(response)

The parts of this code you may want to modify are LabelingJobName, TaskTitle, and TaskDescription. The LabelingJobName is the unique name of the labeling job that SageMaker will use to reference your job. This is also the name that appears on the SageMaker console. TaskTitle serves a similar purpose, but doesn't need to be unique and will be the name of the job that appears in the workforce portal. You may want to make these more specific to what you are labeling or what the labeling job is for. Lastly, we have the TaskDescription field. This field appears in the workforce portal to provide extra context to the labelers about what the task is, such as instructions and guidance for the task. For more information on these fields as well as the others, refer to the create_labeling_job documentation.
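
If you want to check the progress of the labeling job programmatically rather than on the SageMaker console, you can call the DescribeLabelingJob API with the same Boto3 client, as in the following sketch (it assumes the now variable and job name pattern from the preceding script):

# Check on the labeling job created above (sketch; reuses the job name pattern from the script)
import boto3

client = boto3.client("sagemaker")
response = client.describe_labeling_job(LabelingJobName=f"crowd-2d-skeleton-example-{now}")
print(response["LabelingJobStatus"])  # e.g., InProgress, Completed, Failed
print(response["LabelCounters"])      # counts of labeled and unlabeled data objects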

Make changes to the UI

In this section, we go over some of the ways you can customize the UI. The following is a list of the most common potential customizations to the UI in order to adjust it to your modeling task:

  • You can define which keypoints can be labeled. This includes the name of the keypoint and its color.
  • You can change the structure of the skeleton (which keypoints are connected).
  • You can change the line colors for specific lines between specific keypoints.

All of these UI customizations are configurable through arguments passed into the crowd-2d-skeleton component, which is the JavaScript component used in this custom workflow template. In this template, you will find the usage of the crowd-2d-skeleton component. A simplified version is shown in the following code:

<crowd-2d-skeleton
        imgSrc="{{ task.input.image_s3_uri | grant_read_access }}"
        keypointClasses="<keypoint classes>"
        skeletonRig='<skeleton rig definition>'
        skeletonBoundingBox='<skeleton bounding box size>'
        initialValues="{{ task.input.initial_values }}"
>

In the preceding code example, you can see the following attributes on the component: imgSrc, keypointClasses, skeletonRig, skeletonBoundingBox, and initialValues. We describe each attribute's purpose in the following sections, but customizing the UI is as straightforward as changing the values for these attributes, saving the template, and rerunning the post_deployment_script.py we used previously.

imgSrc attribute

The imgSrc attribute controls which image to show in the UI when labeling. Usually, a different image is used for each manifest line item, so this attribute is often populated dynamically using the built-in Liquid templating language. You can see in the previous code example that the attribute value is set to {{ task.input.image_s3_uri | grant_read_access }}, which is a Liquid template variable that is replaced with the actual image_s3_uri value when the template is rendered. The rendering process starts when the user opens an image for annotation. This process grabs a line item from the input manifest file and sends it to the pre-annotation Lambda function as an event.dataObject. The pre-annotation function takes the information it needs from the line item and returns a taskInput dictionary, which is then passed to the Liquid rendering engine, which replaces any Liquid variables in your template. For example, let's say you have a manifest file with the following line:

{"source-ref": "https://aws.amazon.com/blogs/machine-learning/skeleton-based-pose-annotation-labeling-using-amazon-sagemaker-ground-truth/s3://my-bucket/exmaple.jpg", "annotations": []}

This data would be passed to the pre-annotation function. The following code shows how the function extracts the values from the event object:

def lambda_handler(event, context):
    print("Pre-Annotation Lambda Triggered")
    data_object = event["dataObject"]  # this comes directly from the manifest file
    annotations = data_object["annotations"]

    taskInput = {
        "image_s3_uri": data_object["source-ref"],
        "initial_values": json.dumps(annotations)
    }
    return {"taskInput": taskInput, "humanAnnotationRequired": "true"}

The object returned from the function in this case would look like the following code:

{
  "taskInput": {
    "image_s3_uri": "https://aws.amazon.com/blogs/machine-learning/skeleton-based-pose-annotation-labeling-using-amazon-sagemaker-ground-truth/s3://my-bucket/exmaple.jpg",
    "annotations": "[]"
  },
  "humanAnnotationRequired": "true"
}

The returned data from the function is then available to the Liquid template engine, which replaces the template values in the template with the data values returned by the function. The result would be something like the following code:

<crowd-2d-skeleton
        imgSrc="s3://my-bucket/example.jpg" <-- This was "injected" into the template
        keypointClasses="<keypoint classes>"
        skeletonRig='<skeleton rig definition>'
        skeletonBoundingBox='<skeleton bounding box size>'
        initialValues="[]"
>

keypointClasses attribute

The keypointClasses attribute defines which keypoints appear in the UI and are used by the annotators. This attribute takes a JSON string containing a list of objects. Each object represents a keypoint. Each keypoint object should contain the following fields:

  • id – A unique value to identify that keypoint.
  • color – The color of the keypoint represented as an HTML hex color.
  • label – The name or keypoint class.
  • x – This optional attribute is only needed if you want to use the draw skeleton functionality in the UI. The value for this attribute is the x position of the keypoint relative to the skeleton's bounding box. This value is usually obtained from the Skeleton Rig Creator tool. If you're doing keypoint annotations and don't plan to draw an entire skeleton at once, you can set this value to 0.
  • y – This optional attribute is similar to x, but for the vertical dimension.

For more information on the keypointClasses attribute, see the keypointClasses documentation.
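
As an illustration, a keypointClasses value covering a small subset of the COCO human pose keypoints could look like the following; the ids and colors are placeholders, and x and y are set to 0 because this example doesn't use the draw skeleton functionality:

[
  {"id": 1, "color": "#1f77b4", "label": "left_shoulder", "x": 0, "y": 0},
  {"id": 2, "color": "#ff7f0e", "label": "right_shoulder", "x": 0, "y": 0},
  {"id": 3, "color": "#2ca02c", "label": "left_hip", "x": 0, "y": 0}
]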

skeletonRig attribute

The skeletonRig attribute controls which keypoints should have lines drawn between them. This attribute takes a JSON string containing a list of keypoint label pairs. Each pair informs the UI which keypoints to draw lines between. For example, '[["left_ankle","left_knee"],["left_knee","left_hip"]]' informs the UI to draw lines between "left_ankle" and "left_knee" and to draw lines between "left_knee" and "left_hip". This can be generated by the Skeleton Rig Creator tool.

skeletonBoundingBox attribute

The skeletonBoundingBox attribute is optional and only needed if you want to use the draw skeleton functionality in the UI. The draw skeleton functionality is the ability to annotate entire skeletons with a single annotation action. We don't cover this feature in this post. The value for this attribute is the skeleton's bounding box dimensions. This value is usually obtained from the Skeleton Rig Creator tool. If you're doing keypoint annotations and don't plan to draw an entire skeleton at once, you can set this value to null. It is recommended to use the Skeleton Rig Creator tool to get this value.

initialValues attribute

The initialValues attribute is used to pre-populate the UI with annotations obtained from another process (such as another labeling job or a machine learning model). This is useful when doing adjustment or review jobs. The data for this field is usually populated dynamically in the same manner as described for the imgSrc attribute. More details can be found in the crowd-2d-skeleton documentation.

Clean up

To avoid incurring future charges, you should delete the objects in your S3 bucket and delete your AWS CDK stack. You can delete your S3 objects via the Amazon S3 console or the AWS Command Line Interface (AWS CLI). After you have deleted all of the S3 objects in the bucket, you can destroy the AWS CDK stack by running the following code:
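
The repository README.md has the exact commands; for a standard AWS CDK project, emptying the bucket and destroying the stack typically looks like the following, where the bucket name is a placeholder:

aws s3 rm s3://<bucket name> --recursive
cdk destroy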

This removes the resources you created earlier.

Considerations

Additional steps may be needed to productionize your workflow. Here are some considerations, depending on your organization's risk profile:

  • Adding access and application logging
  • Adding a web application firewall (WAF)
  • Adjusting IAM permissions to follow least privilege

Conclusion

In this post, we shared the importance of labeling efficiency and accuracy in building pose estimation datasets. To help with both, we showed how you can use SageMaker Ground Truth to build custom labeling workflows to support skeleton-based pose labeling tasks, aiming to enhance efficiency and precision during the labeling process. We showed how you can further extend the code and examples to various custom pose estimation labeling requirements.

We encourage you to use this solution for your labeling tasks and to engage with AWS for assistance or inquiries related to custom labeling workflows.


About the Authors

Arthur Putnam is a Full-Stack Data Scientist in AWS Professional Services. Arthur's expertise is centered around developing and integrating front-end and back-end technologies into AI systems. Outside of work, Arthur enjoys exploring the latest advancements in technology, spending time with his family, and enjoying the outdoors.

Ben Fenker is a Senior Data Scientist in AWS Professional Services and has helped customers build and deploy ML solutions in industries ranging from sports to healthcare to manufacturing. He has a Ph.D. in physics from Texas A&M University and 6 years of industry experience. Ben enjoys baseball, reading, and raising his kids.

Jarvis Lee is a Senior Data Scientist with AWS Professional Services. He has been with AWS for over six years, working with customers on machine learning and computer vision problems. Outside of work, he enjoys riding bicycles.


