Head In Cloud BVBA
https://headincloud.be/blog/en-us
Fri, 10 Feb 2017 10:35:00 +0000

Consul ports and what they are used for
https://headincloud.be/blog/article/consul-ports-and-what-they-are-used-for/
<p>Regarding firewalls, this depends on your particular implementation. On a Consul server, you probably want to allow communications on all the ports mentioned above.</p> <p>On a Consul agent, things get a bit trickier. Port 8301 needs to be open, as it is required for communication with other agents and servers. Ports 8400, 8500 and 8600 depend on your use-case. If you install a Consul agent on every node, there is no need to open those ports in the host firewall: your applications can simply use the local agent to communicate with the API and DNS interface.</p>
Fri, 10 Feb 2017 10:35:00 +0000
https://headincloud.be/blog/article/consul-ports-and-what-they-are-used-for/

When SysOps need workflow... Introducing Apache NiFi.
https://headincloud.be/blog/article/when-sysops-need-workflow-introducing-apache-nifi/
<h2>What is Apache NiFi?</h2> <p>Apache NiFi is a dataflow tool that is quickly becoming quite popular in the Big Data world. According to the website, NiFi is:</p> <p><em>...an easy to use, powerful, and reliable system to process and distribute data.</em></p> <p>I think the Apache NiFi guys are being a bit too modest here :-) The way I would describe NiFi is:</p> <p><em>Apache NiFi is a web-based tool that allows you to get data from almost any source, and transform/route it to almost any destination using an intuitive WYSIWYG workflow designer.</em> </p> <p>At the moment, you can receive/send data from/to the following data sources:</p> <ul> <li>local files</li> <li>HTTP/HTTPS (very handy if you want to integrate with cloud-based services like PagerDuty, HipChat, Slack, Twilio)</li> <li>Syslog</li> <li>S3</li> <li>Twitter</li> <li>FTP</li> <li>SQS</li> <li>Apache Kafka</li> <li>Probably a lot more... 
:-)</li> </ul> <h2>How can Apache NiFi help in System Operations?</h2> <p>As a system operator, you probably already deal with a lot of data that needs to be processed and evaluated. Over the years, you have probably developed your own solutions to deal with this data. Did you ever create scripts for one or more of these tasks:</p> <ul> <li>Post an alert to a website when a system goes down?</li> <li>Ship log files to another system for further analysis (via FTP, or to S3)?</li> <li>Send an SMS when something happens that's not supposed to happen?</li> </ul> <p>If the answer is yes to any of these questions, then NiFi might be an asset for your IT environment. True, writing your own scripts to solve those issues can give you a high sense of satisfaction, but the most important issue with this approach is this:</p> <p><strong>System operators should be focused on the data their environment generates, and not the code that processes that data.</strong></p> <p>Okay, some readers are probably rolling their eyes right now, but allow me to elaborate. First, let me ask you a few questions about the integration scripts you developed yourself:</p> <ul> <li>Can your script handle a network loss when it is in the middle of processing data?</li> <li>Does it scale across multiple threads?</li> <li>How well does it perform when it suddenly needs to process more data (say, 10x) than the usual load?</li> <li>Do you have a central dashboard that shows the data flow happening in your scripts?</li> </ul> <p>As someone in system operations, you probably don't want to deal with all the "details" mentioned above: <em>you just want to get your data, transform it to what you want it to be, and send it to where it needs to go.</em> </p> <p>Maybe you have a team of coders who can handle the issues mentioned above, but they are probably busy developing your company's product, and are unlikely to have the resources to assist you every time. 
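</p> <p>To make that point concrete: even a minimal "deliver this payload and retry on network loss" helper, the kind of plumbing every homegrown integration script ends up re-implementing, already looks like this (a hypothetical Python sketch, not taken from any real deployment; <code>send</code> stands in for whatever FTP/S3/HTTP push you use):</p>

```python
import time

def ship_with_retries(send, payload, attempts=4, base_delay=1.0):
    """Deliver `payload` via `send`, retrying with exponential backoff.

    `send` is a placeholder for any delivery callable (FTP upload,
    S3 put, HTTP POST) that raises OSError on network failure.
    """
    for attempt in range(attempts):
        try:
            return send(payload)
        except OSError:
            if attempt == attempts - 1:
                raise  # out of retries; without extra buffering, the payload is lost
            time.sleep(base_delay * (2 ** attempt))  # back off: 1s, 2s, 4s, ...
```

<p>And this still says nothing about threading, back-pressure, or a central dashboard. 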
You might consider a proprietary solution, but most of the time you will be stuck with what the vendor offers. You want tools that adapt to your workflow, not the other way around. Apache NiFi is free, and allows you to create any workflow you want, with any data you want.</p> <h2>How does Apache NiFi compare to an ELK stack?</h2> <p>If you are already using an Elasticsearch-Logstash-Kibana (ELK) stack, you might wonder how Apache NiFi fits in. In my opinion, they are two different systems that complement each other:</p> <ul> <li>ELK is great for historical analysis of your data.</li> <li>Apache NiFi is great for real-time processing of your data.</li> </ul> <h2>Example 1: Building a Syslog server.</h2> <p>I admit, I'm a big fan of ChatOps. Having a chat-room as the primary hub of communication for your operations team encourages teamwork, and makes it a lot easier to work with remote teams in different time zones, as they have access to all the conversations that happened while they were still asleep :-)</p> <p>One of the things I wanted was a chat-room that acts as a live feed of all the syslog messages generated by my servers. This is the first workflow I built in NiFi, and I was surprised I had everything up and running in less than 3 hours. Mind you, I had zero experience with NiFi when I built this, so I still needed to get the hang of it. If I had to develop this in a programming language I had no prior experience with, I think it would have taken longer than 3 hours.</p> <p>I use HipChat for team chatrooms, so I need to format the data into something that HipChat expects before posting it to the HipChat HTTP API.</p> <p>Here is what I ended up with:</p> <p><img src="//content.headincloud.be/media/nifi1.png" alt="nifi screenshot" /></p> <p>Take a good look at the picture. 
Even without any NiFi experience, it's quite easy to figure out what's going on:</p> <ul> <li>NiFi starts a syslog listener.</li> <li>Some attributes are added which are required for HipChat formatting.</li> <li>If it's an error, we add another attribute that will cause the message to be displayed in red. If not, it is displayed in green.</li> <li>The last steps just transform the data to JSON, add the correct MIME type, and do an HTTP POST to the HipChat API server.</li> </ul> <p>The only thing left to do was to reconfigure my servers so syslog messages get forwarded to my NiFi server.</p> <p>The output as shown in HipChat:</p> <p><img src="//content.headincloud.be/media/nifi2.png" alt="hipchat screenshot" /></p> <p>The formatting could be improved, but it ain't bad for a first attempt :-)</p> <h2>Example 2: Building an HTTP-to-FTP gateway</h2> <p>Here is another example that shows how you can easily build an HTTP-to-FTP gateway with NiFi:</p> <p><img src="//content.headincloud.be/media/nifi1.png" alt="nifi screenshot 2" /></p> <p>Once again, the flow is quite easy to follow:</p> <ul> <li>NiFi listens for HTTP requests. Files can be uploaded via an HTTP POST request.</li> <li>NiFi uploads the file to the FTP server and sends out an e-mail about the successful upload.</li> <li>If the FTP transfer fails, the file is stored locally for further inspection, and an e-mail is sent out to notify the administrators.</li> </ul> <p><strong>Time to implement:</strong> 30 minutes, more or less. Once again, no coding required.</p> <h2>Batch or real-time? Single-threaded or multi-threaded?</h2> <p>So, is NiFi optimized for real-time processing or batch-processing? The answer is simple: it depends on how you configure it. 
Every box in the diagram is called a <em>"processor"</em>, and its throughput can be configured and tuned to your needs:</p> <p><img src="//content.headincloud.be/media/nifi4.png" alt="nifi screenshot 3" /></p> <h2>Conclusion.</h2> <p>I believe that Apache NiFi is a valuable asset for managing the data flow of your IT environment. I have a simple test to determine whether a tool is worthwhile to me: if I can come up with more than 3 scenarios where this particular tool can help me, I consider it a winner. Apache NiFi passes that test without any doubt.</p> <p>While the examples shown here are quite simple, NiFi can handle very complex workflows, allows flows to be arranged in separate process groups, and also supports clustering.</p> <h2>Additional information.</h2> <p>I have only scratched the surface of what Apache NiFi can do. There is a great introduction video from OSCON 2015, given by Joe Witt of Hortonworks. <a href="https://www.youtube.com/watch?v=sQCgtCoZyFQ" title="Nifi video">I recommend you check it out</a>.</p>
Fri, 29 Jan 2016 22:57:00 +0000
https://headincloud.be/blog/article/when-sysops-need-workflow-introducing-apache-nifi/

Using DynamoDB as a Django settings store
https://headincloud.be/blog/article/using-dynamodb-as-a-django-settings-store/
<h2>Django settings overview</h2> <p>When you don't specify a settings module for your Django project, the <code>settings.py</code> located in your project folder is used. 
You can override the settings module in two ways:</p> <ul> <li>Via the command line, using the <code>--settings=</code> parameter.</li> <li>Via the <code>DJANGO_SETTINGS_MODULE</code> environment variable.</li> </ul> <p>In the past, I used a separate settings module for each environment, which resulted in multiple settings modules in my codebase:</p> <p><code>/project/settings.py</code> (for local development) <br /> <code>/project/settings_test.py</code> (for the test environment) <br /> <code>/project/settings_prod.py</code> (for the production environment) </p> <p>I think most people who started developing with Django did it this way initially; however, there are a few drawbacks to this technique:</p> <ul> <li><p>Disclosure of sensitive information: keeping settings directly in your codebase means sensitive information, like database usernames and passwords, also ends up in your codebase.</p></li> <li><p>Subtle changes in test vs production: you add a parameter to your test environment settings module, but forget to add it to your production settings module.</p></li> <li><p>Changing settings requires deployment: this one speaks for itself. Changing settings should not require a new deployment of your application.</p></li> </ul> <h2>Test vs. production state</h2> <p>While settings on your local development machine can differ from your production environment (after all, during development we experiment with new things), for actual deployment we want our test environment to match our production environment as closely as possible. 
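</p> <p>Incidentally, the "subtle changes in test vs production" drawback above can be guarded against mechanically. As a hypothetical sketch (not part of the article's setup), a check that diffs the setting names defined by two settings modules:</p>

```python
import types  # used below only to build throwaway example modules

def settings_drift(module_a, module_b):
    """Return the setting names defined in one module but not the other.

    Django settings are module-level UPPERCASE names, so we compare
    only those and ignore imports and dunder attributes.
    """
    def names(mod):
        return {name for name in vars(mod) if name.isupper()}
    return names(module_a) ^ names(module_b)

# Two throwaway modules standing in for settings_test / settings_prod:
test_mod = types.ModuleType("settings_test")
test_mod.DEBUG, test_mod.DB_HOST = True, "my.test.server"
prod_mod = types.ModuleType("settings_prod")
prod_mod.DEBUG = False
# settings_drift(test_mod, prod_mod) reports DB_HOST as present on only one side
```

<p>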
In order to make this possible, we need the following three things:</p> <ul> <li>We need to <em>dynamically</em> determine our environment during application startup.</li> <li>Based on the information from the previous step, we need to load the configuration associated with the environment we are currently running in.</li> <li>Our settings-loading mechanism should be identical in test and production.</li> </ul> <h2>Let's bring in AWS</h2> <p>Let's see how we can implement those three steps, with a little help from AWS :-)</p> <h3>Determine our environment</h3> <p>Since we run our application on EC2 instances, we can use tags to identify our environment. For example, for every instance we launch in our test environment, we can add the following tags:</p> <pre><code>Environment: test-myapp01
Environment-role: test-myapp01-website</code></pre> <p>The <code>Environment-role</code> tag is added to quickly identify EC2 instances when your application consists of multiple components. For example, you might also have a role called <code>test-myapp01-mailgateway</code> if your application sends out email and the mail server is on a different instance.</p> <p>If you use tools like CloudFormation or Terraform (and I really recommend you do), you can have those tags added automatically every time you make a change to your infrastructure.</p> <p>During startup of our application, we can determine our environment by querying the meta-data of the instance we are running on.</p> <h3>Loading the configuration associated with our environment</h3> <p>Since our environment is now identified, we can easily load our configuration. 
I chose DynamoDB as the repository for the application settings, since it's highly available in your AWS region, it's cheap, and you can manage it via the AWS console.</p> <h3>Unifying our settings loader</h3> <p>In this setup I only have two settings modules in my codebase:</p> <p><code>/project/settings.py</code> (for local development) <br /> <code>/project/settings_deploy.py</code> (for the test and production environments) </p> <p><code>settings_deploy.py</code> will read the EC2 tags associated with the instance it is running on, and retrieve the settings from the DynamoDB table.</p> <h2>Implementation</h2> <p><strong>DISCLAIMER: This is just a proof-of-concept, and not production-quality code.</strong></p> <h3>Creating the DynamoDB table</h3> <p>The name of the table should be related to the environment-role we run in. For example, if our environment-role is <em>test-myapp-website</em>, we need to create a DynamoDB table called <em>test-myapp-website-config</em>.</p> <p>We will use the AWS command-line tools to do this:</p> <pre><code>aws dynamodb create-table --table-name=test-myapp-website-config \
    --attribute-definitions AttributeName=Parameter,AttributeType=S \
    --key-schema AttributeName=Parameter,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1</code></pre> <p>Next, we will fill this table with some default settings. 
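</p> <p>The request file below uses DynamoDB's typed attribute encoding: every value is wrapped in a type tag, <code>{"S": ...}</code> for strings and <code>{"BOOL": ...}</code> for booleans. To show the shape, here is a small illustrative helper (hypothetical, not part of this setup) that wraps a plain Python settings dict into such <code>PutRequest</code> entries:</p>

```python
def to_put_requests(settings):
    """Wrap a plain settings dict in DynamoDB's typed attribute encoding."""
    def encode(value):
        # bool must be checked before int: bool is a subclass of int in Python
        if isinstance(value, bool):
            return {"BOOL": value}
        if isinstance(value, (int, float)):
            return {"N": str(value)}  # DynamoDB sends numbers as strings
        return {"S": str(value)}

    return [
        {"PutRequest": {"Item": {"Parameter": {"S": key},
                                 "Value": encode(value)}}}
        for key, value in sorted(settings.items())
    ]
```

<p>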
Create the file <code>test-myapp-website-config.json</code> with the following content:</p> <pre><code>{
    "test-myapp-website-config": [
        {"PutRequest": {"Item": {"Parameter": {"S": "debug"},   "Value": {"BOOL": true}}}},
        {"PutRequest": {"Item": {"Parameter": {"S": "db_host"}, "Value": {"S": "my.test.server"}}}},
        {"PutRequest": {"Item": {"Parameter": {"S": "db_name"}, "Value": {"S": "my_db"}}}},
        {"PutRequest": {"Item": {"Parameter": {"S": "db_user"}, "Value": {"S": "my_username"}}}},
        {"PutRequest": {"Item": {"Parameter": {"S": "db_pass"}, "Value": {"S": "my_password"}}}},
        {"PutRequest": {"Item": {"Parameter": {"S": "db_port"}, "Value": {"S": "5432"}}}}
    ]
}</code></pre> <p>Next step, load this file into your DynamoDB table:</p> <pre><code>aws dynamodb batch-write-item --request-items file://test-myapp-website-config.json</code></pre> <h3>Loading our settings from Django</h3> <p>Make sure you have Boto and Requests installed:</p> <pre><code>pip install requests
pip install boto</code></pre> <p>At the top of our <code>settings_deploy.py</code> file, we can add the following code to retrieve the value of our <code>Environment-role</code> tag:</p> <pre><code>import requests
from boto import ec2

# get environment: ask the instance meta-data service for our instance id,
# then look up the Environment-role tag of that instance
r = requests.get('')
if r.status_code == requests.codes.ok:
    instance_id = r.text
    conn = ec2.connect_to_region(AWS_REGION_NAME)
    reservations = conn.get_all_instances()
    for res in reservations:
        for inst in res.instances:
            if inst.__dict__['id'] == instance_id:
                AWS_ENV = inst.__dict__['tags']['Environment-role']</code></pre> <p>Now we can construct the name of our table, and connect to it:</p> <pre><code>from boto import dynamodb

dynamo_conn = dynamodb.connect_to_region(AWS_REGION_NAME)
config_table = dynamo_conn.get_table('{}-config'.format(AWS_ENV))</code></pre> <p>Once we are connected to the table, we can retrieve our settings:</p> <pre><code>DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': config_table.get_item(hash_key='db_name')['Value'],
        'USER': config_table.get_item(hash_key='db_user')['Value'],
        'PASSWORD': config_table.get_item(hash_key='db_pass')['Value'],
        'HOST': config_table.get_item(hash_key='db_host')['Value'],
        'PORT': config_table.get_item(hash_key='db_port')['Value'],
    }
}</code></pre> <h3>IAM access role</h3> <p>Our instances need some additional permissions to read the EC2 tags and the DynamoDB table. Add the following to your instance's IAM role:</p> <pre><code>{
    "Effect": "Allow",
    "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeTags"
    ],
    "Resource": "*"
},
{
    "Effect": "Allow",
    "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:BatchGetItem"
    ],
    "Resource": [
        "&lt;arn of your DynamoDB table&gt;"
    ]
}</code></pre> <h3>Repeat for your production environment</h3> <p>You can now create a similar table for your production environment, and tag your production instances in the same way.</p> <h2>Final words</h2> <p>This is just a quick example, and you might want to do some extra work before implementing it:</p> <ul> <li>Add error checking to make sure the table and values exist in DynamoDB.</li> <li>Use <code>BatchGetItem</code> to retrieve all settings in one go.</li> </ul> <p>Also take a look at dynamodb-config-store: <a href="https://github.com/sebdah/dynamodb-config-store">https://github.com/sebdah/dynamodb-config-store</a>.</p> <h2>References</h2> <p>The Twelve-Factor App: <a href="http://12factor.net">http://12factor.net</a> <br /> How to manage production/staging/dev Django settings: <a href="https://discussion.heroku.com/t/how-to-manage-production-staging-dev-django-settings/21">https://discussion.heroku.com/t/how-to-manage-production-staging-dev-django-settings/21</a> <br /> An Introduction to boto's DynamoDB interface: <a 
href="http://boto.readthedocs.org/en/2.3.0/dynamodb_tut.html">http://boto.readthedocs.org/en/2.3.0/dynamodb_tut.html</a> </p>
Fri, 27 Nov 2015 15:00:00 +0000
https://headincloud.be/blog/article/using-dynamodb-as-a-django-settings-store/

New website launched.
https://headincloud.be/blog/article/new-website-launched/
<p>We are working on some articles about Django web development and deployment on AWS, so stay tuned.</p>
Wed, 11 Nov 2015 22:00:00 +0000
https://headincloud.be/blog/article/new-website-launched/