Channel: Crunchify

PayPal Java SDK Complete Example – How to Invoke PayPal Authorization REST API using Java Client?


PayPal Developer Sandbox Account - Java SDK Example by Crunchify

PayPal is one of the best online payment transfer services out there, and there is no doubt it keeps growing at an impressive pace.

I personally have hands-on experience with the PayPal Java developer APIs and would like to share it with all my Crunchify readers.

In this tutorial we will use the latest version of the PayPal Java SDK, which is 1.14.0.

Here is the Maven dependency to add to your Eclipse Java project.

<dependency>
	<groupId>com.paypal.sdk</groupId>
	<artifactId>rest-api-sdk</artifactId>
	<version>1.14.0</version>
</dependency>

Once you add the above PayPal Maven dependency to your Java project, the SDK jars will be downloaded and visible in your project.

PayPal Java SDK Example - Import Maven Dependency

Objective of this tutorial

  • Using PayPal APIs you can place a hold on your customer’s account
  • Similarly, you can capture money right away for a purchase
  • You can refund your customer using the API
  • You can also void any hold you have previously placed on an account
  • There are simple steps by which you can perform all of the above operations, and that’s what we will do in this tutorial. Mainly, we will place a HOLD on a customer’s account.

What do I need?

You need a PayPal account. Follow the steps below:

  1. Create an official PayPal account
  2. Log in to PayPal’s developer portal using this link: https://developer.paypal.com/developer/applications
  3. Create a new App using this link: https://developer.paypal.com/developer/applications/create
  4. Get the ClientID and ClientSecret, which we need in our program to create the APIContext.

PayPal App ClientID and ClientSecret - Crunchify Tutorial

Once you have the ClientID and ClientSecret, the next step is to write the Java program CrunchifyPayPalSDKTutorial.java 🙂

Here is the complete logic for this program:

  1. Create Payer object and set PaymentMethod
  2. Set RedirectUrls and set cancelURL and returnURL
  3. Set Details and Add PaymentDetails
  4. Set Amount
  5. Set Transaction
  6. Add Payment Details and set Intent to authorize
  7. Create APIContext by passing the clientID, clientSecret and mode
  8. Create Payment object and get paymentID
  9. Set payerID to PaymentExecution object
  10. Execute Payment and get Authorization

Complete code:

package crunchify.com.paypal.sdk;

import java.util.ArrayList;
import java.util.List;

import com.paypal.api.payments.Amount;
import com.paypal.api.payments.Authorization;
import com.paypal.api.payments.Details;
import com.paypal.api.payments.Links;
import com.paypal.api.payments.Payer;
import com.paypal.api.payments.Payment;
import com.paypal.api.payments.PaymentExecution;
import com.paypal.api.payments.RedirectUrls;
import com.paypal.api.payments.Transaction;
import com.paypal.base.rest.APIContext;
import com.paypal.base.rest.PayPalRESTException;

/**
 * @author Crunchify.com 
 * Version: 1.1.0
 * 
 */

public class CrunchifyPayPalSDKTutorial {
	private static String crunchifyID = "<!---- Add your clientID Key here ---->";
	private static String crunchifySecret = "<!---- Add your clientSecret Key here ---->";

	private static String executionMode = "sandbox"; // sandbox or production

	public static void main(String args[]) {
		CrunchifyPayPalSDKTutorial crunchifyObj = new CrunchifyPayPalSDKTutorial();

		// Create a PayPal Payment with intent "authorize" and execute it.
		crunchifyObj.crunchifyCapturePayPalAPI();
	}

	// This simple API call will authorize (place a hold on) a specified amount
	// for any given Payer or User
	public void crunchifyCapturePayPalAPI() {

		/*
		 * Flow would look like this: 
		 * 1. Create Payer object and set PaymentMethod 
		 * 2. Set RedirectUrls and set cancelURL and returnURL 
		 * 3. Set Details and Add PaymentDetails
		 * 4. Set Amount
		 * 5. Set Transaction
		 * 6. Add Payment Details and set Intent to "authorize"
		 * 7. Create APIContext by passing the clientID, secret and mode
		 * 8. Create Payment object and get paymentID
		 * 9. Set payerID to PaymentExecution object
		 * 10. Execute Payment and get Authorization
		 * 
		 */

		Payer crunchifyPayer = new Payer();
		crunchifyPayer.setPaymentMethod("paypal");

		// Redirect URLs
		RedirectUrls crunchifyRedirectUrls = new RedirectUrls();
		crunchifyRedirectUrls.setCancelUrl("http://localhost:3000/crunchifyCancel");
		crunchifyRedirectUrls.setReturnUrl("http://localhost:3000/crunchifyReturn");

		// Set Payment Details Object
		Details crunchifyDetails = new Details();
		crunchifyDetails.setShipping("2.22");
		crunchifyDetails.setSubtotal("3.33");
		crunchifyDetails.setTax("1.11");

		// Set Payment amount
		Amount crunchifyAmount = new Amount();
		crunchifyAmount.setCurrency("USD");
		crunchifyAmount.setTotal("6.66");
		crunchifyAmount.setDetails(crunchifyDetails);

		// Set Transaction information
		Transaction crunchifyTransaction = new Transaction();
		crunchifyTransaction.setAmount(crunchifyAmount);
		crunchifyTransaction.setDescription("Crunchify Tutorial: How to Invoke PayPal REST API using Java Client?");
		List<Transaction> crunchifyTransactions = new ArrayList<Transaction>();
		crunchifyTransactions.add(crunchifyTransaction);

		// Add Payment details
		Payment crunchifyPayment = new Payment();
		
		// Set Payment intent to authorize
		crunchifyPayment.setIntent("authorize");
		crunchifyPayment.setPayer(crunchifyPayer);
		crunchifyPayment.setTransactions(crunchifyTransactions);
		crunchifyPayment.setRedirectUrls(crunchifyRedirectUrls);

		// Pass the clientID, secret and mode. The easiest, and most widely used option.
		APIContext crunchifyapiContext = new APIContext(crunchifyID, crunchifySecret, executionMode);

		try {

			Payment myPayment = crunchifyPayment.create(crunchifyapiContext);

			System.out.println("createdPayment Object Details ==> " + myPayment.toString());

			// Identifier of the payment resource created 
			crunchifyPayment.setId(myPayment.getId());

			PaymentExecution crunchifyPaymentExecution = new PaymentExecution();

			// Set your PayerID. The ID of the Payer, passed in the `return_url` by PayPal.
			crunchifyPaymentExecution.setPayerId("<!---- Add your PayerID here ---->");

			// This call will fail, as the user has to approve the Payment in the UI.
			// There is no way to get the Payer's consent programmatically.
			Payment createdAuthPayment = crunchifyPayment.execute(crunchifyapiContext, crunchifyPaymentExecution);

			// Transactional details including the amount and item details.
			Authorization crunchifyAuthorization = createdAuthPayment.getTransactions().get(0).getRelatedResources().get(0).getAuthorization();

			log("Here is your Authorization ID: " + crunchifyAuthorization.getId());

		} catch (PayPalRESTException e) {

			// The "standard" error output stream. This stream is already open and ready to
			// accept output data.
			System.err.println(e.getDetails());
		}
	}

	private void log(String string) {
		System.out.println(string);

	}
}
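
Note that with intent "authorize", execute() can only succeed after the payer approves the payment by visiting the approval_url that PayPal returns in the created payment's links (you can see it in the console output below; that is also what the Links import is for). Extracting that link boils down to scanning the links for rel equal to "approval_url". Here is a self-contained sketch; CrunchifyLink is a hypothetical stand-in for the SDK's com.paypal.api.payments.Links class (which exposes getRel() and getHref()) so the block runs on its own:

```java
import java.util.List;
import java.util.Optional;

public class CrunchifyApprovalUrlDemo {
	// Hypothetical stand-in for com.paypal.api.payments.Links (getRel()/getHref()).
	record CrunchifyLink(String rel, String href) {}

	// Scan the payment's links and pick the one the payer must visit to approve.
	static Optional<String> findApprovalUrl(List<CrunchifyLink> links) {
		return links.stream()
				.filter(l -> "approval_url".equals(l.rel()))
				.map(CrunchifyLink::href)
				.findFirst();
	}

	public static void main(String[] args) {
		// Shaped like the links block in the createdPayment output below.
		List<CrunchifyLink> links = List.of(
				new CrunchifyLink("self", "https://api.sandbox.paypal.com/v1/payments/payment/PAY-123"),
				new CrunchifyLink("approval_url", "https://www.sandbox.paypal.com/cgi-bin/webscr?cmd=_express-checkout&token=EC-123"),
				new CrunchifyLink("execute", "https://api.sandbox.paypal.com/v1/payments/payment/PAY-123/execute"));

		System.out.println(findApprovalUrl(links).orElse("not found"));
	}
}
```

In the real program you would iterate myPayment.getLinks() the same way and redirect the payer's browser to that href before calling execute().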

Eclipse Console Output:

By default the PayPal SDK enables DEBUG mode and hence logs each and every request and response to the Eclipse console.

For detailed information I’ve kept DEBUG mode on and included the full result of our getAuthorization call here.

13:22:28.013 [main] DEBUG com.paypal.base.ConfigManager - sdk_config.properties not present. Skipping...
13:22:28.212 [main] DEBUG com.paypal.base.rest.OAuthTokenCredential - request header: {Authorization=Basic QVNuZG9aaEdGOEg1MFFQSWw5TGl0elhwSDVYTW16YlBwZmxJREFJOGVjUWVwdlJyWVQ4UnBfZUpNQmh1dHJVUHdaZU9CVGJUOE1GRksdfgsdfgsdfg3Q1ZLd0NzZTllRnBIZTNNNWpOR1liNHVxZ3BrNDNFVmFXR2hHNXR2Tk1na1IyZkZMUWdTUmRFY3Q3cG8=, Accept=application/json, User-Agent=PayPalSDK/PayPal-Java-SDK 1.14.0 (v=11; vendor=Oracle Corporation; bit=64; os=Mac_OS_X 10.14.1), Content-Type=application/x-www-form-urlencoded}
13:22:28.212 [main] DEBUG com.paypal.base.rest.OAuthTokenCredential - request body: grant_type=client_credentials
13:22:28.213 [main] DEBUG com.paypal.base.HttpConnection - curl command: 
curl --verbose --request POST 'https://api.sandbox.paypal.com/v1/oauth2/token' \
  --header "Authorization:Basic QVNuZG9aaEdGOEg1MFFQSWw5TGl0elhwSDVYTW16YlBwZmxJREFJOGVjUasdfasdflITlhSZ0xVMGhiOHhpa0M3Q1ZLd0NzZTllRnBIZTNNNWpOR1liNHVxZ3BrNDNFVmFXR2hHNXR2Tk1na1IyZkZMUWdTUmRFY3Q3cG8=" \
  --header "Accept:application/json" \
  --header "User-Agent:PayPalSDK/PayPal-Java-SDK 1.14.0 (v=11; vendor=Oracle Corporation; bit=64; os=Mac_OS_X 10.14.1)" \
  --header "Content-Type:application/x-www-form-urlencoded" \
  --data 'grant_type=client_credentials'
13:22:28.810 [main] DEBUG com.paypal.base.rest.OAuthTokenCredential - response header: {paypal-debug-id=[961e2e4122ac1], null=[HTTP/1.1 200 OK], Paypal-Debug-Id=[961e2e4122ac1], Server=[Apache], Connection=[close], Vary=[Authorization], Set-Cookie=[X-PP-SILOVER=; Expires=Thu, 01 Jan 1970 00:00:01 GMT, X-PP-SILOVER=name%3DSANDBOX3.API.1%26silo_version%3D1880%26app%3Dapiplatformproxyserv%26TIME%3D4098358363%26HTTP_X_PP_AZ_LOCATOR%3Dsandbox.slc; Expires=Mon, 26 Nov 2018 19:52:28 GMT; domain=.paypal.com; path=/; Secure; HttpOnly], HTTP_X_PP_AZ_LOCATOR=[sandbox.slc], Content-Length=[876], X-PAYPAL-TOKEN-SERVICE=[IAAS], Date=[Mon, 26 Nov 2018 19:22:28 GMT], Content-Type=[application/json]}
13:22:28.810 [main] DEBUG com.paypal.base.rest.OAuthTokenCredential - response: {"scope":"https://api.paypal.com/v1/payments/.* https://uri.paypal.com/services/payments/refund https://uri.paypal.com/services/applications/webhooks https://uri.paypal.com/services/payments/payment/authcapture https://uri.paypal.com/payments/payouts https://api.paypal.com/v1/vault/credit-card/.* https://uri.paypal.com/services/disputes/read-seller https://uri.paypal.com/services/subscriptions https://uri.paypal.com/services/disputes/read-buyer https://api.paypal.com/v1/vault/credit-card openid https://uri.paypal.com/services/disputes/update-seller https://uri.paypal.com/services/payments/realtimepayment","nonce":"2018-11-26T19:03:03ZymZQ8MNE2MarndZEjUoxwB70puoxUA-NXqc7pUVtVxk","access_token":"A21AAGyWgsdafxUM_1FCE5d9adsfuwfiOB7_4XkX3wKHWXe3nkKgt2bhadflirJsMWP9JAm-pBT2DtUJ5W0A","token_type":"Bearer","app_id":"APP-80W284ads543T","expires_in":31235}
13:22:28.817 [main] DEBUG com.paypal.base.rest.PayPalResource - request header: {Authorization=Bearer A21AAGyWgsdafxUM_1FCE5d9adsfuwfiOB7_4XkX3wKHWXe3nkKgt2bhadflirJsMWP9JAm-pBT2DtUJ5W0A, User-Agent=PayPalSDK/  (v=11; vendor=Oracle Corporation; bit=64; os=Mac_OS_X 10.14.1), PayPal-Request-Id=74886e72-34e4-4a0-8cd7-6adsf63b5dc9, Accept=application/json, Content-Type=application/json}
13:22:28.817 [main] DEBUG com.paypal.base.rest.PayPalResource - request body: {
  "intent": "sale",
  "payer": {
    "payment_method": "paypal"
  },
  "transactions": [
    {
      "amount": {
        "currency": "USD",
        "total": "6.66",
        "details": {
          "subtotal": "3.33",
          "shipping": "2.22",
          "tax": "1.11"
        }
      },
      "description": "Crunchify Tutorial: How to Invoke PayPal REST API using Java Client?"
    }
  ],
  "redirect_urls": {
    "return_url": "http://localhost:3000/crunchifyReturn",
    "cancel_url": "http://localhost:3000/crunchifyCancel"
  }
}
13:22:28.818 [main] DEBUG com.paypal.base.HttpConnection - curl command: 
curl --verbose --request POST 'https://api.sandbox.paypal.com/v1/payments/payment' \
  --header "Authorization:Bearer A21AAGyWgsdafxUM_1FCE5d9adsfuwfiOB7_4XkX3wKHWXe3nkKgt2bhadflirJsMWP9JAm-pBT2DtUJ5W0A" \
  --header "User-Agent:PayPalSDK/  (v=11; vendor=Oracle Corporation; bit=64; os=Mac_OS_X 10.14.1)" \
  --header "PayPal-Request-Id:74886e72-34e4-4a70-8cd7-605cd63b5dc9" \
  --header "Accept:application/json" \
  --header "Content-Type:application/json" \
  --data '{
  "intent": "sale",
  "payer": {
    "payment_method": "paypal"
  },
  "transactions": [
    {
      "amount": {
        "currency": "USD",
        "total": "6.66",
        "details": {
          "subtotal": "3.33",
          "shipping": "2.22",
          "tax": "1.11"
        }
      },
      "description": "Crunchify Tutorial: How to Invoke PayPal REST API using Java Client?"
    }
  ],
  "redirect_urls": {
    "return_url": "http://localhost:3000/crunchifyReturn",
    "cancel_url": "http://localhost:3000/crunchifyCancel"
  }
}'
13:22:33.407 [main] DEBUG com.paypal.base.rest.PayPalResource - response: {"id":"PAY-25A74012S3552184CLP6EP6A","intent":"sale","state":"created","payer":{"payment_method":"paypal"},"transactions":[{"amount":{"total":"6.66","currency":"USD","details":{"subtotal":"3.33","tax":"1.11","shipping":"2.22"}},"description":"Crunchify Tutorial: How to Invoke PayPal REST API using Java Client?","related_resources":[]}],"create_time":"2018-11-26T19:22:32Z","links":[{"href":"https://api.sandbox.paypal.com/v1/payments/payment/PAY-25A74012S3552184CLP6EP6A","rel":"self","method":"GET"},{"href":"https://www.sandbox.paypal.com/cgi-bin/webscr?cmd=_express-checkout&token=EC-9C691969F1033220V","rel":"approval_url","method":"REDIRECT"},{"href":"https://api.sandbox.paypal.com/v1/payments/payment/PAY-25A74012S3552184CLP6EP6A/execute","rel":"execute","method":"POST"}]}
createdPayment Object Details ==> {
  "id": "PAY-25A74012S3552184CLP6EP6A",
  "intent": "sale",
  "payer": {
    "payment_method": "paypal"
  },
  "transactions": [
    {
      "related_resources": [],
      "amount": {
        "currency": "USD",
        "total": "6.66",
        "details": {
          "subtotal": "3.33",
          "shipping": "2.22",
          "tax": "1.11"
        }
      },
      "description": "How to Invoke PayPal REST API using Java Client?"
    }
  ],
  "state": "created",
  "create_time": "2018-11-26T19:22:32Z",
  "links": [
    {
      "href": "https://api.sandbox.paypal.com/v1/payments/payment/PAY-25A74012S3552184CLP6EP6A",
      "rel": "self",
      "method": "GET"
    },
    {
      "href": "https://www.sandbox.paypal.com/cgi-bin/webscr?cmd\u003d_express-checkout\u0026token\u003dEC-9C691969F1033220V",
      "rel": "approval_url",
      "method": "REDIRECT"
    },
    {
      "href": "https://api.sandbox.paypal.com/v1/payments/payment/PAY-25A74012S3552184CLP6EP6A/execute",
      "rel": "execute",
      "method": "POST"
    }
  ]
}
13:22:33.414 [main] DEBUG com.paypal.base.rest.PayPalResource - request header: {Authorization=Bearer A21AAGyWgkMoxUM_1FCE5d948J5SAxIuwfiOB7_4XkX3wKHWXe3nkKgt2bhtXISnazHlE9yzlirJsMWP9JAm-pBT2DtUJ5W0A, User-Agent=PayPalSDK/  (v=11; vendor=Oracle Corporation; bit=64; os=Mac_OS_X 10.14.1), PayPal-Request-Id=e47412c8-2c1f-4505-b1a3-7b4723eb99f4, Accept=application/json, Content-Type=application/json}
13:22:33.415 [main] DEBUG com.paypal.base.rest.PayPalResource - request body: {
  "payer_id": "1Z232AMNN"
}
13:22:33.415 [main] DEBUG com.paypal.base.HttpConnection - curl command: 
curl --verbose --request POST 'https://api.sandbox.paypal.com/v1/payments/payment/PAY-25A74012S3552184CLP6EP6A/execute' \
  --header "Authorization:Bearer A21AAGyWgsdafxUM_1FCE5d9adsfuwfiOB7_4XkX3wKHWXe3nkKgt2bhadflirJsMWP9JAm-pBT2DtUJ5W0A" \
  --header "User-Agent:PayPalSDK/  (v=11; vendor=Oracle Corporation; bit=64; os=Mac_OS_X 10.14.1)" \
  --header "PayPal-Request-Id:eadf412c8-2c1f-4505-b1a3-7basdaf99f4" \
  --header "Accept:application/json" \
  --header "Content-Type:application/json" \
  --data '{
  "payer_id": "1Z232AMNN"
}'
13:22:34.016 [main] ERROR com.paypal.base.HttpConnection - Response code: 400	Error response: {"name":"PAYMENT_NOT_APPROVED_FOR_EXECUTION","message":"Payer has not approved payment","information_link":"https://developer.paypal.com/docs/api/payments/#errors","debug_id":"113d5208bf8d3"}
name: PAYMENT_NOT_APPROVED_FOR_EXECUTION	message: Payer has not approved payment	details: null	debug-id: 113d5208bf8d3	information-link: https://developer.paypal.com/docs/api/payments/#errors

In the next few tutorials I’ll provide more details on how to capture money, refund money, and void any authorization you have placed on a payer’s account.

The post PayPal Java SDK Complete Example – How to Invoke PayPal Authorization REST API using Java Client? appeared first on Crunchify.


Ansible: How to Execute Commands on remote Hosts and get command result (log) back?


Ansible Execute Command Result Details- Crunchify Tutorial

On Crunchify we have published quite a few Ansible articles before, covering the installation of Ansible, copying files from one host to a remote host, and more.

In this tutorial we will go over the steps to execute a script on a remote host after copying it there.

This technique is very helpful if you are an IT admin and want to upgrade thousands of VMs and hosts at the same time with a single command.

Ansible is the go-to tool for us at Crunchify, as we manage lots of hosts for our clients and patch OSes on a regular basis.

Let’s get started

We will perform the below tasks with a single Ansible command:

  1. On Host1: create the file crunchify-script.sh under the folder /opt/ashah/
  2. On Host2: create the folder /opt/ashah/
  3. Copy the crunchify-script.sh file from Host1 to Host2 under the folder /opt/ashah/
  4. Execute crunchify-script.sh on the remote host using the ansible-playbook command.
  5. Get the complete command-line result back

Step-1

Create the crunchify-script.sh file under the /opt/ashah/ folder.

  • This script will cd into the folder /opt/ashah/
  • Extract JDK 11.0.2 using the tar -zxvf command
  • Set up JAVA_HOME once extraction is finished

crunchify-script.sh

cd /opt/ashah/
tar -zxvf jdk-11.0.2_linux-x64_bin.tar.gz
export JAVA_HOME=/opt/ashah/jdk-11.0.2
export PATH=$JAVA_HOME/bin:$PATH

Step-2

Create the .yml playbook file for Ansible.

crunchify_execute_command.yml file

---
- name: Let's copy our executable script to the remote location, execute it and get the result back.
  remote_user: root
  # Note: sudo/sudo_user is deprecated in favor of become/become_user (removed in Ansible 2.9)
  sudo: yes
  hosts: crunchify-group
  tasks:
     - name: Transfer the script
       copy: src=/opt/ashah/crunchify-script.sh dest=/opt/ashah mode=0777

     - name: Execute the script
       command: sh /opt/ashah/crunchify-script.sh

Step-3

The crunchify-hosts file contains the list of all remote hosts.

crunchify-hosts file

#crunchify-group
[crunchify-group]
192.66.129.83

Step-4

Execute the ansible-playbook command.

ansible-playbook -b -v -u root crunchify_execute_command.yml -kkkk --extra-vars "crunchify-group" -i crunchify-hosts

Here is a result:

root@localhost:/opt/ashah# ansible-playbook -b -v -u root crunchify_execute_command.yml -kkkk --extra-vars "crunchify-group" -i crunchify-hosts
Using /etc/ansible/ansible.cfg as config file
SSH password: 
/opt/ashah/crunchify-hosts did not meet host_list requirements, check plugin documentation if this is unexpected
/opt/ashah/crunchify-hosts did not meet script requirements, check plugin documentation if this is unexpected
[DEPRECATION WARNING]: Instead of sudo/sudo_user, use become/become_user and make sure become_method is 'sudo' (default). This 
feature will be removed in version 2.9. Deprecation warnings can be disabled by setting deprecation_warnings=False in 
ansible.cfg.

PLAY [Transfer and execute a script.] *******************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************
ok: [192.66.129.83]

TASK [Transfer the script] ******************************************************************************************************
changed: [192.66.129.83] => {"changed": true, "checksum": "37dd2b7bd24c04fca7b7c436f299541a1f813f17", "dest": "/opt/ashah/crunchify-script.sh", "gid": 0, "group": "root", "md5sum": "140a200fbb7a12cbd6e1f57d3e14794f", "mode": "0777", "owner": "root", "size": 129, "src": "/root/.ansible/tmp/ansible-tmp-1553138812.28-91041260033433/source", "state": "file", "uid": 0}

TASK [Execute the script] *******************************************************************************************************
changed: [192.66.129.83] => {"changed": true, "cmd": ["sh", "/opt/ashah/crunchify-script.sh"], "delta": "0:00:02.713976", "end": "2019-03-21 03:26:56.151494", "rc": 0, "start": "2019-03-21 03:26:53.437518", "stderr": "", "stderr_lines": [], "stdout": "jdk-11.0.2/README.html\njdk-11.0.2/bin/jaotc\njdk-11.0.2/bin/jar\njdk-11.0.2/bin/jarsigner\njdk-11.0.2/bin/java\njdk-11.0.2/bin/jsted.certs", "jdk-11.0.2/lib/security/cacerts", "jdk-11.0.2/lib/security/default.policy", "jdk-11.0.2/lib/security/public_suffix_list.dat", "jdk-11.0.2/lib/server/Xusage.txt", "jdk-11.0.2/lib/server/libjsig.so", "jdk-11.0.2/lib/server/libjvm.so", "jdk-11.0.2/lib/src.zip", "jdk-11.0.2/lib/tzdb.dat", "jdk-11.0.2/release"]}

PLAY RECAP **********************************************************************************************************************
192.66.129.83                : ok=3    changed=2    unreachable=0    failed=0   

root@localhost:/opt/ashah#

Ansible Execute Command Result - Crunchify Tutorial

What’s next?

Check out the complete tutorial on how to copy files to a remote host with Ansible:

Ansible: How to copy File, Directory or Script from localhost to Remote host?

The post Ansible: How to Execute Commands on remote Hosts and get command result (log) back? appeared first on Crunchify.

WordPress Fastest Social Sharing Plugin (Crunchy Social) – Developed by team Crunchify


Crunchy Social Sharing WordPress Plugin

We are pleased to announce our brand new, super-fast WordPress plugin: Crunchy Social.

It’s time for all of us to rethink social media sharing plugins. Last year we published an article about how to create social sharing buttons without any JavaScript, and it was an overnight hit. The article was shared millions of times across the internet and on social media and blogging platforms.

We received more than a hundred requests to create a plugin that doesn’t use any JavaScript and loads super fast 🙂

That’s the main reason we have been working on a super simple social sharing WordPress plugin, and it’s ready for prime time now.

Crunchy Social – A Lightweight Social Sharing Plugin

The main idea behind the Crunchy Social sharing plugin is simplicity.

We are using Crunchy Social on all of our client sites and on Crunchify too. No messing around with code or your functions.php file.

Performance optimization shouldn’t have to be complicated, so everything can be configured with a single click. Crunchy Social is built with performance in mind: without making any query to the database or any external API endpoint, the sharing buttons load in a fraction of a second.

Crunchy Social Sharing WordPress Plugin - Post Page Sharing Admin Panel Options

Crunchy Social Sharing WordPress Plugin - Floating Sharing Admin Panel Options

Crunchy Social Sharing WordPress Plugin - Mobile Bottom Sharing Admin Panel Options

Crunchy Social Sharing WordPress Plugin - Common Admin Panel Options

Features:

Here are some of the current features in the Crunchy Social plugin, and there are a lot more coming!

  • Post/Page and any Custom Post Type (CPT) based social sharing button integration
  • Floating Social Sharing button
  • Mobile sticky bottom social sharing options
  • [ crunchy_social_sharing ] shortcode integration
  • Auto display on Post, Page, Media and Custom Post type
  • 100% responsive
  • Easily reorder social icons
  • Show buttons before content, after content, or both
  • Add and align custom text before the social share icons
  • Option to set a Pinterest image in case a post/page has no featured image
  • Twitter username for Twitter Sharing

List of Social Sharing Options:

  • Facebook
  • Twitter
  • LinkedIn
  • Pinterest
  • WhatsApp
  • Buffer
  • Reddit
  • Tumblr
  • Mail
  • Pocket
  • Telegram
  • YCombinator
  • Print

Shortcode Option:

[ crunchy_social_sharing crunchy_social_sharing_option='facebook,twitter,linkedin,whatsapp,pinterest' twitter_username='Crunchify' icon_order='fa,tw,ln,pi,wh' ]

This is how the shortcode works 🙂

More Examples:

Normal Post/Page Sharing Options

Crunchy Social Sharing Premium Plugin - Normal Post Buttons Options

Vertical Sharing View:

Crunchy Social Sharing - Floating Options

Mobile Sharing View:

Crunchy Social Sharing - iPhoneX - Mobile Layout

Support From the Developers

You get support directly from me and the Crunchify team. We don’t outsource anything.

Pricing

Because it takes time to develop the plugin, update it, and fix bugs, we tried to keep the price as low as possible. Here are the pricing options:

  • $25 for a 1-site license. Includes 1 year of support and updates.
  • $75 for a 5-site license. Includes 1 year of support and updates.
  • $115 for a 10-site license. Includes 1 year of support and updates.
  • $295 for a 50-site license. Includes 1 year of support and updates.

And yes, the plugin comes with a 14-day money-back guarantee. For a limited time, use our coupon code EARLY25 for 25% off! (offer expires on March 31st)

Visit Crunchy Social

We hope you enjoy the Crunchy Social plugin as much as we do! We have a lot of great new features already planned for it and hope to make it the #1 lightweight social sharing plugin for WordPress.

The post WordPress Fastest Social Sharing Plugin (Crunchy Social) – Developed by team Crunchify appeared first on Crunchify.

What is Lock(), UnLock(), ReentrantLock(), TryLock() and How it’s different from Synchronized Block in Java?


In this tutorial we will go over Lock(), UnLock(), ReentrantLock(), TryLock() and how they differ from synchronized blocks in Java.

If you also have any of the below questions, then you are at the right place.

  • Locks in Java
  • Java Lock Example and Concurrency Lock vs synchronized
  • Java Concurrency Tutorial – Reentrant Locks
  • synchronization – Proper lock/unlock usage for Java
  • java – Synchronization vs Lock
  • java lock unlock example
  • locking mechanism in java
  • java lock unlock different thread

Let’s get started. First, let’s understand each of these terms and then we will go over a working example.

Lock():

A Lock is a thread synchronization mechanism like a synchronized block, except locks can be more sophisticated than Java’s synchronized blocks. The java.util.concurrent.locks package provides interfaces and classes with a framework for locking and waiting for conditions that is distinct from built-in synchronization and monitors.

UnLock():

unlock() releases the lock on the object.

ReentrantLock():

A ReentrantLock is owned by the thread that last successfully locked it and has not yet unlocked it. A thread invoking lock() will return, successfully acquiring the lock, when the lock is not owned by another thread. The method returns immediately if the current thread already owns the lock.

TryLock():

tryLock() acquires the lock only if it is free at the time of invocation.
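
There is also a timed variant, tryLock(long, TimeUnit), which waits up to the given time for the lock before giving up. A small runnable sketch (class and method names are hypothetical, not from any library):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class CrunchifyTryLockDemo {

	// Try to take the lock, waiting at most timeoutMs; report whether we got it.
	static boolean acquireOrGiveUp(ReentrantLock lock, long timeoutMs) throws InterruptedException {
		if (lock.tryLock(timeoutMs, TimeUnit.MILLISECONDS)) {
			try {
				return true; // got the lock within the timeout
			} finally {
				lock.unlock();
			}
		}
		return false; // somebody else held it the whole time
	}

	public static void main(String[] args) throws Exception {
		ReentrantLock lock = new ReentrantLock();

		// Free lock: acquired immediately.
		System.out.println("free lock acquired: " + acquireOrGiveUp(lock, 100));

		// Hold the lock on another thread, then watch the timed attempt give up.
		Thread holder = new Thread(() -> {
			lock.lock();
			try {
				Thread.sleep(500);
			} catch (InterruptedException ignored) {
			} finally {
				lock.unlock();
			}
		});
		holder.start();
		Thread.sleep(50); // let the holder grab the lock first
		System.out.println("contended lock acquired: " + acquireOrGiveUp(lock, 100));
		holder.join();
	}
}
```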

Tip 1: If you’re simply locking an object, I’d prefer to use synchronized.

lock.lock();
yourMethod(); // If this throws an exception...
lock.unlock(); // ...this line never runs and the lock is never released!

Whereas with synchronized, it’s super clear and impossible to get wrong:

synchronized(myObject) {
      doSomethingNifty();
}
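
When you do reach for an explicit Lock, the standard way to make it as safe as synchronized is to release it in a finally block, so the lock is freed even when the guarded code throws. A minimal sketch (names hypothetical):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class CrunchifyLockIdiom {
	private static final Lock lock = new ReentrantLock();

	static void safeUpdate(Runnable critical) {
		lock.lock();
		try {
			critical.run(); // even if this throws...
		} finally {
			lock.unlock(); // ...the lock is always released
		}
	}

	public static void main(String[] args) {
		safeUpdate(() -> System.out.println("inside critical section"));
		// A second call proves the first one released the lock.
		safeUpdate(() -> System.out.println("lock was released"));
	}
}
```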

Example Details:

  1. Create class: CrunchifyLockTutorial.java
  2. Create inner classes: Company and CrunchifyLoop
  3. From Main create two objects of class Company
  4. Start thread loop for 10 on those objects
  5. While Company1 talks to Company2, it locks both objects. If at the same time Company2 wants to talk to Company1, it reports a conflict: a lock already exists (both companies are already talking).

package crunchify.com.tutorial;

import java.util.Random;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * @author Crunchify.com
 *
 */

public class CrunchifyLockTutorial {
	public static void main(String[] args) {
		final Company crunchify = new Company("Crunchify");
		final Company google = new Company("Google");
		new Thread(new CrunchifyLoop(crunchify, google)).start();
		new Thread(new CrunchifyLoop(google, crunchify)).start();
	}

	// Class CrunchifyLoop
	static class CrunchifyLoop implements Runnable {
		private Company companyName1;
		private Company companyName2;

		public CrunchifyLoop(Company companyName1, Company companyName2) {
			this.companyName1 = companyName1;
			this.companyName2 = companyName2;
		}

		public void run() {
			Random random = new Random();
			// Loop 10
			for (int counter = 0; counter <= 10; counter++) {
				try {
					Thread.sleep(random.nextInt(5));
				} catch (InterruptedException e) {
				}
				companyName2.crunchifyTalking(companyName1);
			}
		}
	}

	// Class Company
	static class Company {
		private final String companyName;

		// ReentrantLock: Creates an instance of ReentrantLock. This is equivalent to using ReentrantLock(false)
		private final Lock lock = new ReentrantLock();

		// Constructor
		public Company(String name) {
			this.companyName = name;
		}

		public String getName() {
			return this.companyName;
		}

		public boolean isTalking(Company companyName) {
			boolean crunchifyLock = false;
			boolean googleLock = false;
			try {
				// tryLock: Acquires the lock only if it is free at the time of invocation.
				crunchifyLock = lock.tryLock();
				googleLock = companyName.lock.tryLock();
			} finally {
				if (!(crunchifyLock && googleLock)) {
					if (crunchifyLock) {
						// unlock: Releases the lock.
						lock.unlock();
					}
					if (googleLock) {
						companyName.lock.unlock();
					}
				}
			}
			return crunchifyLock && googleLock;
		}

		public void crunchifyTalking(Company companyName) {
			// Check whether a lock already exists
			if (isTalking(companyName)) {
				try {
					System.out.format("I'm %s: talking to %s %n", this.companyName, companyName.getName());
				} finally {
					lock.unlock();
					companyName.lock.unlock();
				}
			} else {
				System.out.format("\tLock Situation ==> I'm %s: talking to %s, but it seems"
						+ " we are already talking. Conflicting. %n", this.companyName, companyName.getName());
			}
		}
	}
}

Output:

I'm Crunchify: talking to Google 
	Lock Situation ==> I'm Google: talking to Crunchify, but it seems we are already talking. Conflicting. 
I'm Google: talking to Crunchify 
I'm Google: talking to Crunchify 
I'm Crunchify: talking to Google 
I'm Google: talking to Crunchify 
I'm Google: talking to Crunchify 
I'm Crunchify: talking to Google 
	Lock Situation ==> I'm Google: talking to Crunchify, but it seems we are already talking. Conflicting. 
	Lock Situation ==> I'm Crunchify: talking to Google, but it seems we are already talking. Conflicting. 
	Lock Situation ==> I'm Google: talking to Crunchify, but it seems we are already talking. Conflicting. 
I'm Crunchify: talking to Google 
I'm Google: talking to Crunchify 
I'm Google: talking to Crunchify 
I'm Crunchify: talking to Google 
I'm Google: talking to Crunchify 
	Lock Situation ==> I'm Google: talking to Crunchify, but it seems we are already talking. Conflicting. 
	Lock Situation ==> I'm Crunchify: talking to Google, but it seems we are already talking. Conflicting. 
I'm Crunchify: talking to Google 
I'm Crunchify: talking to Google 
I'm Crunchify: talking to Google 
I'm Crunchify: talking to Google

Tip 2: In principle, everything the utilities in java.util.concurrent do can be achieved with low-level primitives like synchronized, volatile, and wait()/notify(); the utilities just make it far easier to get right.
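
As a small illustration of that tip, here is a bare-bones one-shot latch (a simplified stand-in for java.util.concurrent.CountDownLatch; the class name is hypothetical) built from synchronized, wait() and notifyAll() alone:

```java
// A minimal one-shot latch built only from intrinsic locks and monitors.
public class CrunchifyLatch {
	private int count;

	public CrunchifyLatch(int count) {
		this.count = count;
	}

	// Decrement the count; wake every waiter once it reaches zero.
	public synchronized void countDown() {
		if (count > 0 && --count == 0) {
			notifyAll();
		}
	}

	// Block until the count reaches zero.
	public synchronized void await() throws InterruptedException {
		while (count > 0) {
			wait(); // releases the monitor while waiting, reacquires on wake-up
		}
	}

	public static void main(String[] args) throws InterruptedException {
		CrunchifyLatch latch = new CrunchifyLatch(2);
		for (int i = 0; i < 2; i++) {
			new Thread(latch::countDown).start();
		}
		latch.await(); // returns only after both worker threads counted down
		System.out.println("All workers finished");
	}
}
```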

The post What is Lock(), UnLock(), ReentrantLock(), TryLock() and How it’s different from Synchronized Block in Java? appeared first on Crunchify.

How to install and configure Filebeat? Lightweight Log Forwarder for Dev/Prod Environment


How to install and configure Filebeat - Lightweight Log Forwarder

Over the last few years I’ve been playing with Filebeat, one of the best lightweight log/data forwarders for your production applications.

Consider a scenario in which you have to transfer logs from one client location to central location for analysis. Splunk is one of the alternative to forward logs but it’s too costly. In my opinion it’s way too costly.

That’s where Filebeat comes into picture. It’s super light weight, simple, easy to setup, uses less memory and too efficient. Filebeat is a product of Elastic.co.

It’s Robust and Doesn’t Miss a Beat. It guarantees delivery of logs.

It’s ready of all types of containers:

With simple one liner command, Filebeat handles collection, parsing and visualization of logs from any of below environments:

  • Apache
  • NGINX
  • System
  • MySQL
  • Apache2
  • Auditd
  • Elasticsearch
  • haproxy
  • Icinga
  • IIS
  • Iptables
  • Kafka
  • Kibana
  • Logstash
  • MongoDB
  • Osquery
  • PostgreSQL
  • Redis
  • Suricata
  • Traefik
  • And more…

One of the best Lightweight log file shipper

Filebeat comes with internal modules (auditd, Apache, NGINX, System, MySQL, and more) that simplify the collection, parsing, and visualization of common log formats down to a single command.

How to Install Filebeat on Linux environment?

If you have any of below questions then you are at right place:

  • Getting Started With Filebeat
  • A Filebeat Tutorial: Getting Started
  • Install, Configure, and Use FileBeat – Elasticsearch
  • Filebeat setup and configuration example
  • How To Install Elasticsearch, Logstash?
  • How to Install Elastic Stack on Ubuntu?

Step-1) Installation

Download and extract the Filebeat binary using the commands below.

Linux environment:

root@localhost:~# curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-linux-x86_64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11.1M  100 11.1M    0     0  13.2M      0 --:--:-- --:--:-- --:--:-- 13.2M

root@localhost:~# tar xzvf filebeat-6.7.0-linux-x86_64.tar.gz

root@localhost:~# cd filebeat-6.7.0-linux-x86_64/

root@localhost:~/filebeat-6.7.0-linux-x86_64# pwd
/root/filebeat-6.7.0-linux-x86_64

root@localhost:~/filebeat-6.7.0-linux-x86_64# ls -ltra
total 36720
-rw-r--r--  1 root root    13675 Mar 21 14:30 LICENSE.txt
-rw-r--r--  1 root root   163444 Mar 21 14:30 NOTICE.txt
drwxr-xr-x  4 root root     4096 Mar 21 14:31 kibana
drwxr-xr-x  2 root root     4096 Mar 21 14:33 modules.d
drwxr-xr-x 21 root root     4096 Mar 21 14:33 module
-rw-r--r--  1 root root   146747 Mar 21 14:33 fields.yml
-rw-------  1 root root     7714 Mar 21 14:33 filebeat.yml
-rw-r--r--  1 root root    69996 Mar 21 14:33 filebeat.reference.yml
-rwxr-xr-x  1 root root 37161549 Mar 21 14:34 filebeat
-rw-r--r--  1 root root      802 Mar 21 14:35 README.md
-rw-r--r--  1 root root       41 Mar 21 14:35 .build_hash.txt
drwx------  9 root root     4096 Mar 30 13:46 ..
drwxr-xr-x  5 root root     4096 Mar 30 13:46 .

Mac Download:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-darwin-x86_64.tar.gz
tar xzvf filebeat-6.7.0-darwin-x86_64.tar.gz

RPM Download:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-x86_64.rpm
sudo rpm -vi filebeat-6.7.0-x86_64.rpm

Step-2) Configure filebeat.yml config file

Check out the filebeat.yml file. It’s Filebeat’s configuration file.

Here is the default file content.

root@localhost:~/filebeat-6.7.0-linux-x86_64# cat filebeat.yml 
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
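The multiline.pattern shown in the config above (^\[) is a plain regular expression. As a quick sanity check, the same pattern can be exercised in Java (the class name and the sample log lines are made-up examples):

```java
import java.util.regex.Pattern;

public class CrunchifyMultilinePatternCheck {
    public static void main(String[] args) {
        // Same regex as multiline.pattern in filebeat.yml: match lines starting with '['
        Pattern pattern = Pattern.compile("^\\[");

        // A new log event starts with a bracketed timestamp, so the pattern matches
        System.out.println(pattern.matcher("[2019-03-30T14:52:02] INFO filebeat started").find()); // true

        // A stack-trace continuation line does not match, so Filebeat appends it
        // to the previous event (with multiline.match: after)
        System.out.println(pattern.matcher("    at com.crunchify.Main.run(Main.java:42)").find()); // false
    }
}
```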

Open the filebeat.yml file and set up your log file location:
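For example, to ship the test log file used later in this tutorial, the input section of filebeat.yml would be flipped to enabled and pointed at the file (the path here matches the one in the startup logs; adjust it to your own):

```yaml
filebeat.inputs:
- type: log
  # Must be true, otherwise this input is ignored
  enabled: true
  paths:
    - /crunchify/tutorials/log/crunchify-filebeat-test.log
```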

Step-3) Send logs to Elasticsearch

Make sure you have started ElasticSearch locally before running Filebeat. I’ll publish an article later today on how to install and run ElasticSearch locally with simple steps.

Here is a filebeat.yml file configuration for ElasticSearch.

ElasticSearch runs on port 9200.

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

And you are all set.
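If you prefer shipping to Logstash instead of Elasticsearch, enable the Logstash output and comment out the Elasticsearch one (only one output may be enabled at a time; the host/port below are the defaults from the sample config):

```yaml
#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["localhost:5044"]
```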

Step-4) Run Filebeat

bash-3.2$ sudo chown root filebeat.yml 
bash-3.2$ sudo ./filebeat -e

Execute the above two commands from the Filebeat root directory and you should see Filebeat startup logs as below.

root@localhost:/user/crunchify/filebeat-6.6.2-linux-x86_64# sudo chown root filebeat.yml 
root@localhost:/user/crunchify/filebeat-6.6.2-linux-x86_64# sudo ./filebeat -e
2019-03-30T14:52:02.608Z	INFO	instance/beat.go:616	Home path: [/user/crunchify/filebeat-6.6.2-linux-x86_64] Config path: [/user/crunchify/filebeat-6.6.2-linux-x86_64] Data path: [/user/crunchify/filebeat-6.6.2-linux-x86_64/data] Logs path: [/user/crunchify/filebeat-6.6.2-linux-x86_64/logs]
2019-03-30T14:52:02.608Z	INFO	instance/beat.go:623	Beat UUID: da7e202d-d480-42df-907a-1073b19c8e2d
2019-03-30T14:52:02.609Z	INFO	[seccomp]	seccomp/seccomp.go:116	Syscall filter successfully installed
2019-03-30T14:52:02.609Z	INFO	[beat]	instance/beat.go:936	Beat info	{"system_info": {"beat": {"path": {"config": "/user/crunchify/filebeat-6.6.2-linux-x86_64", "data": "/user/crunchify/filebeat-6.6.2-linux-x86_64/data", "home": "/user/crunchify/filebeat-6.6.2-linux-x86_64", "logs": "/user/crunchify/filebeat-6.6.2-linux-x86_64/logs"}, "type": "filebeat", "uuid": "da7e202d-d480-42df-907a-1073b19c8e2d"}}}
2019-03-30T14:52:02.609Z	INFO	[beat]	instance/beat.go:945	Build info	{"system_info": {"build": {"commit": "1eea934ce81be553337f2828bd12131896fea8e4", "libbeat": "6.6.2", "time": "2019-03-06T14:17:59.000Z", "version": "6.6.2"}}}
2019-03-30T14:52:02.609Z	INFO	[beat]	instance/beat.go:948	Go runtime info	{"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.10.8"}}}
2019-03-30T14:52:02.611Z	INFO	[beat]	instance/beat.go:952	Host info	{"system_info": {"host": {"architecture":"x86_64","boot_time":"2019-01-15T18:44:58Z","containerized":false,"name":"localhost","ip":["127.0.0.1/8","::1/128","50.116.13.161/24","192.168.177.126/17","2600:3c01::f03c:91ff:fe17:4534/64","fe80::f03c:91ff:fe17:4534/64"],"kernel_version":"4.18.0-13-generic","mac":["f2:3c:91:17:45:34"],"os":{"family":"debian","platform":"ubuntu","name":"Ubuntu","version":"18.10 (Cosmic Cuttlefish)","major":18,"minor":10,"patch":0,"codename":"cosmic"},"timezone":"UTC","timezone_offset_sec":0,"id":"1182104d1089460dbcc0c94ff1954c8c"}}}
2019-03-30T14:52:02.611Z	INFO	[beat]	instance/beat.go:981	Process info	{"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/user/crunchify/filebeat-6.6.2-linux-x86_64", "exe": "/user/crunchify/filebeat-6.6.2-linux-x86_64/filebeat", "name": "filebeat", "pid": 20394, "ppid": 20393, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2019-03-30T14:52:01.740Z"}}}
2019-03-30T14:52:02.611Z	INFO	instance/beat.go:281	Setup Beat: filebeat; Version: 6.6.2
2019-03-30T14:52:05.613Z	INFO	add_cloud_metadata/add_cloud_metadata.go:319	add_cloud_metadata: hosting provider type not detected.
2019-03-30T14:52:05.614Z	INFO	elasticsearch/client.go:165	Elasticsearch url: http://localhost:9200
2019-03-30T14:52:05.615Z	INFO	[publisher]	pipeline/module.go:110	Beat name: localhost
2019-03-30T14:52:05.615Z	INFO	instance/beat.go:403	filebeat start running.
2019-03-30T14:52:05.615Z	INFO	registrar/registrar.go:134	Loading registrar data from /user/crunchify/filebeat-6.6.2-linux-x86_64/data/registry
2019-03-30T14:52:05.615Z	INFO	[monitoring]	log/log.go:117	Starting metrics logging every 30s
2019-03-30T14:52:05.616Z	INFO	registrar/registrar.go:141	States Loaded from registrar: 0
2019-03-30T14:52:05.616Z	INFO	crawler/crawler.go:72	Loading Inputs: 1
2019-03-30T14:52:05.616Z	INFO	log/input.go:138	Configured paths: [/crunchify/tutorials/log/crunchify-filebeat-test.log]
2019-03-30T14:52:05.616Z	INFO	input/input.go:114	Starting input of type: log; ID: 7740765267175828127 
2019-03-30T14:52:05.617Z	INFO	crawler/crawler.go:106	Loading and starting Inputs completed. Enabled inputs: 1
2019-03-30T14:52:05.617Z	INFO	cfgfile/reload.go:150	Config reloader started
2019-03-30T14:52:05.617Z	INFO	cfgfile/reload.go:205	Loading of config files completed.

Step-5) Result

The next step is for you to check how logs are flowing into Elasticsearch and how you visualize them. We will go over a detailed tutorial on that very soon. Stay tuned.

What’s next? Set up Elasticsearch

How to Install and Configure Elasticsearch on your Dev/Production environment?

The post How to install and configure Filebeat? Lightweight Log Forwarder for Dev/Prod Environment appeared first on Crunchify.

How to Install and Configure Elasticsearch on your Dev/Production environment?


How to Install and Configure Elasticsearch on Linux environment

In this tutorial we will go over steps on how to install and configure Elasticsearch for your development and production environment.

What is ElasticSearch?

One of the best search and analytics engines out there. Elasticsearch is a distributed, JSON-based engine designed for horizontal scalability, maximum reliability, and easy management.

Elasticsearch centrally stores your data so you can discover the expected and uncover the unexpected. You could send all your logs to Elasticsearch via Filebeat and visualize metrics instantly.

How to Start ElasticSearch as normal user

If you have any of below questions then you are at right place:

  • How To Install and Configure Elasticsearch on Ubuntu 16.04
  • Elasticsearch Setup and Configuration
  • Installing and Configuring Elasticsearch
  • How to Install and configure a remote Elasticsearch instance

Step-1) Install Elasticsearch

Here are a few simple commands to install Elasticsearch on your Linux/Ubuntu OS.

bash-3.2$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.tar.gz

bash-3.2$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.tar.gz.sha512

bash-3.2$ shasum -a 512 -c elasticsearch-6.7.0.tar.gz.sha512 

bash-3.2$ tar -xzf elasticsearch-6.7.0.tar.gz

bash-3.2$ cd elasticsearch-6.7.0/

And that’s it. Here are installation logs.

Installation logs:

crunch@localhost:/$ cd tmp/

crunch@localhost:/tmp$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.tar.gz
--2019-03-30 14:41:25--  https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.tar.gz
Resolving artifacts.elastic.co (artifacts.elastic.co)... 2a04:4e42:a::734, 151.101.42.222
Connecting to artifacts.elastic.co (artifacts.elastic.co)|2a04:4e42:a::734|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 149006122 (142M) [application/x-gzip]
Saving to: ‘elasticsearch-6.7.0.tar.gz’

elasticsearch-6.7.0.tar.gz             100%[=========================================================================>] 142.10M   215MB/s    in 0.7s    

2019-03-30 14:41:26 (215 MB/s) - ‘elasticsearch-6.7.0.tar.gz’ saved [149006122/149006122]

crunch@localhost:/tmp$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.tar.gz.sha512
--2019-03-30 14:41:26--  https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.tar.gz.sha512
Resolving artifacts.elastic.co (artifacts.elastic.co)... 2a04:4e42:a::734, 151.101.42.222
Connecting to artifacts.elastic.co (artifacts.elastic.co)|2a04:4e42:a::734|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 156 [application/octet-stream]
Saving to: ‘elasticsearch-6.7.0.tar.gz.sha512’

elasticsearch-6.7.0.tar.gz.sha512      100%[=========================================================================>]     156  --.-KB/s    in 0s      

2019-03-30 14:41:26 (24.2 MB/s) - ‘elasticsearch-6.7.0.tar.gz.sha512’ saved [156/156]

crunch@localhost:/tmp$ shasum -a 512 -c elasticsearch-6.7.0.tar.gz.sha512 
elasticsearch-6.7.0.tar.gz: OK

crunch@localhost:/tmp$ tar -xzf elasticsearch-6.7.0.tar.gz

crunch@localhost:/tmp$ cd elasticsearch-6.7.0/

crunch@localhost:/tmp/elasticsearch-6.7.0$ ./bin/elasticsearch
warning: Falling back to java on path. This behavior is deprecated. Specify JAVA_HOME
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=1
[2019-03-30T14:42:15,073][INFO ][o.e.e.NodeEnvironment    ] [ZKsMkES] using [1] data paths, mounts [[/ (/dev/sda)]], net usable_space [69.8gb], net total_space [78.2gb], types [ext4]
[2019-03-30T14:42:15,079][INFO ][o.e.e.NodeEnvironment    ] [ZKsMkES] heap size [1007.3mb], compressed ordinary object pointers [true]
[2019-03-30T14:42:15,084][INFO ][o.e.n.Node               ] [ZKsMkES] node name derived from node ID [ZKsMkESwRL27iYEKUaBluQ]; set [node.name] to override
[2019-03-30T14:42:15,084][INFO ][o.e.n.Node               ] [ZKsMkES] version[6.7.0], pid[20094], build[default/tar/8453f77/2019-03-21T15:32:29.844721Z], OS[Linux/4.18.0-13-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13-Ubuntu-3ubuntu3.18.10.1]
[2019-03-30T14:42:15,084][INFO ][o.e.n.Node               ] [ZKsMkES] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-6051013812527326393, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.path.home=/tmp/elasticsearch-6.7.0, -Des.path.conf=/tmp/elasticsearch-6.7.0/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-03-30T14:42:17,459][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [aggs-matrix-stats]
[2019-03-30T14:42:17,460][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [analysis-common]
[2019-03-30T14:42:17,460][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [ingest-common]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [ingest-geoip]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [ingest-user-agent]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [lang-expression]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [lang-mustache]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [lang-painless]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [mapper-extras]
[2019-03-30T14:42:17,488][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [parent-join]
[2019-03-30T14:42:17,488][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [percolator]
[2019-03-30T14:42:17,488][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [rank-eval]
[2019-03-30T14:42:17,488][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [reindex]
[2019-03-30T14:42:17,488][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [repository-url]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [transport-netty4]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [tribe]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-ccr]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-core]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-deprecation]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-graph]
[2019-03-30T14:42:17,490][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-ilm]
[2019-03-30T14:42:17,490][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-logstash]
[2019-03-30T14:42:17,490][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-ml]
[2019-03-30T14:42:17,490][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-monitoring]
[2019-03-30T14:42:17,496][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-rollup]
[2019-03-30T14:42:17,497][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-security]
[2019-03-30T14:42:17,497][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-sql]
[2019-03-30T14:42:17,498][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-upgrade]
[2019-03-30T14:42:17,499][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-watcher]
[2019-03-30T14:42:17,499][INFO ][o.e.p.PluginsService     ] [ZKsMkES] no plugins loaded
[2019-03-30T14:42:22,899][INFO ][o.e.x.s.a.s.FileRolesStore] [ZKsMkES] parsed [0] roles from file [/tmp/elasticsearch-6.7.0/config/roles.yml]
[2019-03-30T14:42:24,035][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [ZKsMkES] [controller/20173] [Main.cc@109] controller (64 bit): Version 6.7.0 (Build d74ae2ac01b10d) Copyright (c) 2019 Elasticsearch BV
[2019-03-30T14:42:24,565][DEBUG][o.e.a.ActionModule       ] [ZKsMkES] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-03-30T14:42:24,896][INFO ][o.e.d.DiscoveryModule    ] [ZKsMkES] using discovery type [zen] and host providers [settings]
[2019-03-30T14:42:25,908][INFO ][o.e.n.Node               ] [ZKsMkES] initialized
[2019-03-30T14:42:25,908][INFO ][o.e.n.Node               ] [ZKsMkES] starting ...
[2019-03-30T14:42:26,087][INFO ][o.e.t.TransportService   ] [ZKsMkES] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2019-03-30T14:42:26,128][WARN ][o.e.b.BootstrapChecks    ] [ZKsMkES] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2019-03-30T14:42:29,217][INFO ][o.e.c.s.MasterService    ] [ZKsMkES] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {ZKsMkES}{ZKsMkESwRL27iYEKUaBluQ}{YYShNSkJT7Ctc-LFBTrO8w}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=4136235008, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2019-03-30T14:42:29,226][INFO ][o.e.c.s.ClusterApplierService] [ZKsMkES] new_master {ZKsMkES}{ZKsMkESwRL27iYEKUaBluQ}{YYShNSkJT7Ctc-LFBTrO8w}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=4136235008, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {ZKsMkES}{ZKsMkESwRL27iYEKUaBluQ}{YYShNSkJT7Ctc-LFBTrO8w}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=4136235008, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-03-30T14:42:29,315][INFO ][o.e.h.n.Netty4HttpServerTransport] [ZKsMkES] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2019-03-30T14:42:29,316][INFO ][o.e.n.Node               ] [ZKsMkES] started
[2019-03-30T14:42:29,322][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [ZKsMkES] Failed to clear cache for realms [[]]
[2019-03-30T14:42:29,404][INFO ][o.e.g.GatewayService     ] [ZKsMkES] recovered [0] indices into cluster_state
[2019-03-30T14:42:29,692][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.watches] for index patterns [.watches*]
[2019-03-30T14:42:29,730][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2019-03-30T14:42:29,783][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.watch-history-9] for index patterns [.watcher-history-9*]
[2019-03-30T14:42:29,818][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
[2019-03-30T14:42:29,866][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
[2019-03-30T14:42:29,892][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
[2019-03-30T14:42:29,957][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
[2019-03-30T14:42:29,991][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2019-03-30T14:42:30,116][INFO ][o.e.l.LicenseService     ] [ZKsMkES] license [319113dd-7fb5-4ae5-8355-e2c7458b1532] mode [basic] - valid

Step-2) Start Elasticsearch Process

You need to make sure JAVA_HOME is set up correctly. Note that JAVA_HOME should point to the JDK installation directory itself, not its bin folder.

crunch@localhost:/usr/lib/jvm/java-11-openjdk-amd64/bin$ export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
crunch@localhost:/usr/lib/jvm/java-11-openjdk-amd64/bin$ echo $JAVA_HOME
/usr/lib/jvm/java-11-openjdk-amd64

Start ElasticSearch process command:

./bin/elasticsearch

Make sure you start Elasticsearch as a normal (non-root) user. Elasticsearch won’t start as root, and you will see the error below if you try to run it as the root user.

java.lang.RuntimeException: can not run elasticsearch as root

Follow this tutorial on how to add a non-root user and log in.

Here is the console output:

crunch@localhost:/tmp/elasticsearch-6.7.0$ ./bin/elasticsearch

warning: Falling back to java on path. This behavior is deprecated. Specify JAVA_HOME
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=1
[2019-03-30T14:42:15,073][INFO ][o.e.e.NodeEnvironment    ] [ZKsMkES] using [1] data paths, mounts [[/ (/dev/sda)]], net usable_space [69.8gb], net total_space [78.2gb], types [ext4]
[2019-03-30T14:42:15,079][INFO ][o.e.e.NodeEnvironment    ] [ZKsMkES] heap size [1007.3mb], compressed ordinary object pointers [true]
[2019-03-30T14:42:15,084][INFO ][o.e.n.Node               ] [ZKsMkES] node name derived from node ID [ZKsMkESwRL27iYEKUaBluQ]; set [node.name] to override
[2019-03-30T14:42:15,084][INFO ][o.e.n.Node               ] [ZKsMkES] version[6.7.0], pid[20094], build[default/tar/8453f77/2019-03-21T15:32:29.844721Z], OS[Linux/4.18.0-13-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13-Ubuntu-3ubuntu3.18.10.1]
[2019-03-30T14:42:15,084][INFO ][o.e.n.Node               ] [ZKsMkES] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-6051013812527326393, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.path.home=/tmp/elasticsearch-6.7.0, -Des.path.conf=/tmp/elasticsearch-6.7.0/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-03-30T14:42:17,459][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [aggs-matrix-stats]
[2019-03-30T14:42:17,460][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [analysis-common]
[2019-03-30T14:42:17,460][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [ingest-common]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [ingest-geoip]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [ingest-user-agent]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [lang-expression]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [lang-mustache]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [lang-painless]
[2019-03-30T14:42:17,487][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [mapper-extras]
[2019-03-30T14:42:17,488][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [parent-join]
[2019-03-30T14:42:17,488][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [percolator]
[2019-03-30T14:42:17,488][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [rank-eval]
[2019-03-30T14:42:17,488][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [reindex]
[2019-03-30T14:42:17,488][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [repository-url]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [transport-netty4]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [tribe]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-ccr]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-core]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-deprecation]
[2019-03-30T14:42:17,489][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-graph]
[2019-03-30T14:42:17,490][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-ilm]
[2019-03-30T14:42:17,490][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-logstash]
[2019-03-30T14:42:17,490][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-ml]
[2019-03-30T14:42:17,490][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-monitoring]
[2019-03-30T14:42:17,496][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-rollup]
[2019-03-30T14:42:17,497][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-security]
[2019-03-30T14:42:17,497][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-sql]
[2019-03-30T14:42:17,498][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-upgrade]
[2019-03-30T14:42:17,499][INFO ][o.e.p.PluginsService     ] [ZKsMkES] loaded module [x-pack-watcher]
[2019-03-30T14:42:17,499][INFO ][o.e.p.PluginsService     ] [ZKsMkES] no plugins loaded
[2019-03-30T14:42:22,899][INFO ][o.e.x.s.a.s.FileRolesStore] [ZKsMkES] parsed [0] roles from file [/tmp/elasticsearch-6.7.0/config/roles.yml]
[2019-03-30T14:42:24,035][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [ZKsMkES] [controller/20173] [Main.cc@109] controller (64 bit): Version 6.7.0 (Build d74ae2ac01b10d) Copyright (c) 2019 Elasticsearch BV
[2019-03-30T14:42:24,565][DEBUG][o.e.a.ActionModule       ] [ZKsMkES] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-03-30T14:42:24,896][INFO ][o.e.d.DiscoveryModule    ] [ZKsMkES] using discovery type [zen] and host providers [settings]
[2019-03-30T14:42:25,908][INFO ][o.e.n.Node               ] [ZKsMkES] initialized
[2019-03-30T14:42:25,908][INFO ][o.e.n.Node               ] [ZKsMkES] starting ...
[2019-03-30T14:42:26,087][INFO ][o.e.t.TransportService   ] [ZKsMkES] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2019-03-30T14:42:26,128][WARN ][o.e.b.BootstrapChecks    ] [ZKsMkES] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2019-03-30T14:42:29,217][INFO ][o.e.c.s.MasterService    ] [ZKsMkES] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {ZKsMkES}{ZKsMkESwRL27iYEKUaBluQ}{YYShNSkJT7Ctc-LFBTrO8w}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=4136235008, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2019-03-30T14:42:29,226][INFO ][o.e.c.s.ClusterApplierService] [ZKsMkES] new_master {ZKsMkES}{ZKsMkESwRL27iYEKUaBluQ}{YYShNSkJT7Ctc-LFBTrO8w}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=4136235008, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {ZKsMkES}{ZKsMkESwRL27iYEKUaBluQ}{YYShNSkJT7Ctc-LFBTrO8w}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=4136235008, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-03-30T14:42:29,315][INFO ][o.e.h.n.Netty4HttpServerTransport] [ZKsMkES] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2019-03-30T14:42:29,316][INFO ][o.e.n.Node               ] [ZKsMkES] started
[2019-03-30T14:42:29,322][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [ZKsMkES] Failed to clear cache for realms [[]]
[2019-03-30T14:42:29,404][INFO ][o.e.g.GatewayService     ] [ZKsMkES] recovered [0] indices into cluster_state
[2019-03-30T14:42:29,692][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.watches] for index patterns [.watches*]
[2019-03-30T14:42:29,730][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2019-03-30T14:42:29,783][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.watch-history-9] for index patterns [.watcher-history-9*]
[2019-03-30T14:42:29,818][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
[2019-03-30T14:42:29,866][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
[2019-03-30T14:42:29,892][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
[2019-03-30T14:42:29,957][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
[2019-03-30T14:42:29,991][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ZKsMkES] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2019-03-30T14:42:30,116][INFO ][o.e.l.LicenseService     ] [ZKsMkES] license [319113dd-7fb5-4ae5-8355-e2c7458b1532] mode [basic] - valid

Step-3) Check the Elasticsearch process

How to make sure Elasticsearch is running?

command: ps -few | grep elastic

crunch@localhost:/tmp/elasticsearch-6.7.0$ ps -few | grep elastic
crunch   20305     1 99 14:46 pts/0    00:00:28 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-5628366226360196103 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Djava.locale.providers=COMPAT -XX:UseAVX=2 -Des.path.home=/tmp/elasticsearch-6.7.0 -Des.path.conf=/tmp/elasticsearch-6.7.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /tmp/elasticsearch-6.7.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
crunch   20320 20305  0 14:46 pts/0    00:00:00 /tmp/elasticsearch-6.7.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
crunch   20362 20062  0 14:46 pts/0    00:00:00 grep --color=auto elastic

That’s it. You are all set running Elasticsearch.
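Besides ps, you can confirm the node is actually serving HTTP requests on its default port 9200. Here is a minimal Java sketch (the EsHealthCheck class name and the localhost:9200 address are assumptions based on Elasticsearch's defaults):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class EsHealthCheck {

    // Returns the HTTP status for the given base URL,
    // or a short message if the node is not reachable.
    static String check(String baseUrl) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(baseUrl).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            return "HTTP " + conn.getResponseCode();
        } catch (Exception e) {
            return "Elasticsearch not reachable: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // A healthy node answers with HTTP 200 and its cluster info JSON
        System.out.println(check("http://localhost:9200/"));
    }
}
```

Running `curl http://localhost:9200/` gives you the same answer from the shell.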

Default elasticsearch startup file:

crunch@localhost:/tmp/elasticsearch-6.7.0/bin$ pwd
/tmp/elasticsearch-6.7.0/bin

crunch@localhost:/tmp/elasticsearch-6.7.0/bin$ vi elasticsearch

elasticsearch file content:

source "`dirname "$0"`"/elasticsearch-env

ES_JVM_OPTIONS="$ES_PATH_CONF"/jvm.options
JVM_OPTIONS=`"$JAVA" -cp "$ES_CLASSPATH" org.elasticsearch.tools.launchers.JvmOptionsParser "$ES_JVM_OPTIONS"`
ES_JAVA_OPTS="${JVM_OPTIONS//\$\{ES_TMPDIR\}/$ES_TMPDIR} $ES_JAVA_OPTS"

cd "$ES_HOME"
# manual parsing to find out, if process should be detached
if ! echo $* | grep -E '(^-d |-d$| -d |--daemonize$|--daemonize )' > /dev/null; then
  exec \
    "$JAVA" \
    $ES_JAVA_OPTS \
    -Des.path.home="$ES_HOME" \
    -Des.path.conf="$ES_PATH_CONF" \
    -Des.distribution.flavor="$ES_DISTRIBUTION_FLAVOR" \
    -Des.distribution.type="$ES_DISTRIBUTION_TYPE" \
    -cp "$ES_CLASSPATH" \
    org.elasticsearch.bootstrap.Elasticsearch \
    "$@"
else
  exec \
    "$JAVA" \
    $ES_JAVA_OPTS \
    -Des.path.home="$ES_HOME" \
    -Des.path.conf="$ES_PATH_CONF" \
    -Des.distribution.flavor="$ES_DISTRIBUTION_FLAVOR" \
    -Des.distribution.type="$ES_DISTRIBUTION_TYPE" \
    -cp "$ES_CLASSPATH" \
    org.elasticsearch.bootstrap.Elasticsearch \
    "$@" \
    <&- &
  retval=$?
  pid=$!
  [ $retval -eq 0 ] || exit $retval
  if [ ! -z "$ES_STARTUP_SLEEP_TIME" ]; then
    sleep $ES_STARTUP_SLEEP_TIME
  fi
  if ! ps -p $pid > /dev/null ; then
    exit 1
  fi
  exit 0
fi

exit $?

What’s next? Setup Filebeat.

How to install and configure Filebeat? Lightweight Log Forwarder for Dev/Prod Environment

The post How to Install and Configure Elasticsearch on your Dev/Production environment? appeared first on Crunchify.

How to fix “java.lang.RuntimeException: can not run elasticsearch as root” Exception?


How to fix java.lang.RuntimeException -can not run elasticsearch as root Exception

Are you getting the below exception while running Elasticsearch?

java.lang.RuntimeException: can not run elasticsearch as root

Why is Elasticsearch not allowed to run as root?

Elasticsearch is a regular user-space process; it has no need for root-level system access and runs perfectly well without any root privileges.

If you are running Elasticsearch in a container (for example under Docker or Kubernetes), only the container runtime itself should run as root.
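Elasticsearch enforces this check natively at bootstrap. The idea can be sketched in plain Java (a hypothetical RootGuard class, not Elasticsearch's actual implementation):

```java
public class RootGuard {

    // Mirrors the spirit of Elasticsearch's startup check:
    // refuse to continue when the effective user is root.
    static void ensureNotRoot(String user) {
        if ("root".equals(user)) {
            throw new RuntimeException("can not run elasticsearch as root");
        }
    }

    public static void main(String[] args) {
        ensureNotRoot(System.getProperty("user.name"));
        System.out.println("Running as: " + System.getProperty("user.name"));
    }
}
```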

Here is a complete exception:

root@localhost:/user/crunchify/elasticsearch-6.7.0/bin# ./elasticsearch
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=1
[2019-03-30T18:44:11,186][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [unknown] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-6.7.0.jar:6.7.0]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-6.7.0.jar:6.7.0]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.7.0.jar:6.7.0]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.7.0.jar:6.7.0]
	at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.7.0.jar:6.7.0]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:116) ~[elasticsearch-6.7.0.jar:6.7.0]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) ~[elasticsearch-6.7.0.jar:6.7.0]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:103) ~[elasticsearch-6.7.0.jar:6.7.0]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170) ~[elasticsearch-6.7.0.jar:6.7.0]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) ~[elasticsearch-6.7.0.jar:6.7.0]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-6.7.0.jar:6.7.0]
	... 6 more

How to fix this issue?

  • Add a local user using the adduser command.
  • Add the user to the sudo group using the usermod command.

root@localhost:/# adduser crunchify
Adding user `crunchify' ...
Adding new group `crunchify' (1001) ...
Adding new user `crunchify' (1001) with group `crunchify' ...
The home directory `/home/crunchify' already exists.  Not copying from `/etc/skel'.
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for crunchify
Enter the new value, or press ENTER for the default
	Full Name []: Crunchify
	Room Number []: 
	Work Phone []: 
	Home Phone []: 
	Other []: 
Is the information correct? [Y/n] Y

root@localhost:/# usermod -aG sudo crunchify

Now login as a newly created user:

root@localhost:/# sudo su crunchify

crunchify@localhost:/$

Try running Elasticsearch again and you should be all good.

Here is a complete tutorial on how to setup Elasticsearch.

How to Install and Configure Elasticsearch on your Dev/Production environment?

The post How to fix “java.lang.RuntimeException: can not run elasticsearch as root” Exception? appeared first on Crunchify.

How to create executable .jar file using Linux commands and without Eclipse Shortcut?


How to create executable Jar file using Linux commands and without Eclipse

So far it was always easy for me to right-click in Eclipse IDE and create an executable .jar file with a few simple clicks. Last week I had to create an executable .jar file manually on my DigitalOcean node, on which we usually host lots of services.

It took some time to build the .jar file using only commands, but after about 10 minutes the executable .jar file was ready.

If you have any of below questions then you are at right place:

  • Best way to create a jar file in Linux command-line
  • How do I make a jar file executable in Linux?
  • How do I compile a jar file in Linux?
  • How to make a JAR file Linux executable?
  • How to Create and Execute a .Jar File in Linux Terminal?

Let’s get started:

Step-1) Make sure you have Java installed on the Linux system

If Java is already installed, go to step-4 directly. Try running the command javac; if you see the result below, Java is not installed on the host.

root@localhost:~/java/src# javac

Command 'javac' not found, but can be installed with:

apt install openjdk-11-jdk-headless
apt install default-jdk            
apt install openjdk-8-jdk-headless 
apt install ecj

Step-2) Install Java on the Linux host

root@localhost:~/java/src# apt install openjdk-11-jdk-headless
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Suggested packages:
  openjdk-11-demo openjdk-11-source
The following NEW packages will be installed:
  openjdk-11-jdk-headless
0 upgraded, 1 newly installed, 0 to remove and 92 not upgraded.
Need to get 217 MB of archives.
After this operation, 228 MB of additional disk space will be used.
Get:1 http://mirrors.linode.com/ubuntu cosmic-updates/main amd64 openjdk-11-jdk-headless amd64 11.0.1+13-3ubuntu3.18.10.1 [217 MB]
Fetched 217 MB in 3s (70.3 MB/s)                  
Selecting previously unselected package openjdk-11-jdk-headless:amd64.
(Reading database ... 107409 files and directories currently installed.)
Preparing to unpack .../openjdk-11-jdk-headless_11.0.1+13-3ubuntu3.18.10.1_amd64.deb ...
Unpacking openjdk-11-jdk-headless:amd64 (11.0.1+13-3ubuntu3.18.10.1) ...
Setting up openjdk-11-jdk-headless:amd64 (11.0.1+13-3ubuntu3.18.10.1) ...
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jar to provide /usr/bin/jar (jar) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jarsigner to provide /usr/bin/jarsigner (jarsigner) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/javadoc to provide /usr/bin/javadoc (javadoc) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/javap to provide /usr/bin/javap (javap) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jcmd to provide /usr/bin/jcmd (jcmd) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jdb to provide /usr/bin/jdb (jdb) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jdeprscan to provide /usr/bin/jdeprscan (jdeprscan) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jdeps to provide /usr/bin/jdeps (jdeps) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jimage to provide /usr/bin/jimage (jimage) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jinfo to provide /usr/bin/jinfo (jinfo) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jlink to provide /usr/bin/jlink (jlink) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jmap to provide /usr/bin/jmap (jmap) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jmod to provide /usr/bin/jmod (jmod) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jps to provide /usr/bin/jps (jps) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jrunscript to provide /usr/bin/jrunscript (jrunscript) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jshell to provide /usr/bin/jshell (jshell) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jstack to provide /usr/bin/jstack (jstack) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jstat to provide /usr/bin/jstat (jstat) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jstatd to provide /usr/bin/jstatd (jstatd) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/rmic to provide /usr/bin/rmic (rmic) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/serialver to provide /usr/bin/serialver (serialver) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jaotc to provide /usr/bin/jaotc (jaotc) in auto mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jhsdb to provide /usr/bin/jhsdb (jhsdb) in auto mode

Step-3) Verify the Java installation

Run the commands below again and you should see the installed Java details.

root@localhost:~/crunchify/src/package# which java
/usr/bin/java

root@localhost:~/crunchify/src/package# java -version
openjdk version "11.0.1" 2018-10-16
OpenJDK Runtime Environment (build 11.0.1+13-Ubuntu-3ubuntu3.18.10.1)
OpenJDK 64-Bit Server VM (build 11.0.1+13-Ubuntu-3ubuntu3.18.10.1, mixed mode, sharing)

Step-4) Compile, package and run

Here are the list of commands you need to execute in order to create executable .jar files.

root@localhost:~# pwd
/root

root@localhost:~# mkdir crunchify

root@localhost:~# cd crunchify/

root@localhost:~/crunchify# mkdir src

root@localhost:~/crunchify# mkdir src/package

root@localhost:~/crunchify# cd src/package/

root@localhost:~/crunchify/src/package# vi Crunchify.java

root@localhost:~/crunchify/src/package# cat Crunchify.java 
public class Crunchify 
{ 
    public static void main(String args[]) 
    { 
        System.out.println("\n\nHello there... /n This is an example to create executable .jar file using only commands...\n\n"); 
    } 
} 

root@localhost:~/crunchify/src/package# cd ..

root@localhost:~/crunchify/src# mkdir build

root@localhost:~/crunchify/src# mkdir build/classes

root@localhost:~/crunchify/src# javac -sourcepath src -d build/classes package/Crunchify.java

root@localhost:~/crunchify/src# cd package/

root@localhost:~/crunchify/src/package# cp Crunchify.java ../

root@localhost:~/crunchify/src/package# cd ..

root@localhost:~/crunchify/src# java -classpath build/classes/ Crunchify


Hello there... /n This is an example to create executable .jar file using only commands...


root@localhost:~/crunchify/src# echo Main-Class: Crunchify>myManifest
root@localhost:~/crunchify/src# jar cfm build/Crunchify.jar myManifest -C build/classes/ .
root@localhost:~/crunchify/src# java -jar build/Crunchify.jar


Hello there... /n This is an example to create executable .jar file using only commands...


root@localhost:~/crunchify/src# 

root@localhost:~/crunchify/src# cd build/

root@localhost:~/crunchify/src/build# ls -ltra
total 16
drwxr-xr-x 2 root root 4096 Mar 31 00:48 classes
drwxr-xr-x 4 root root 4096 Mar 31 00:52 ..
-rw-r--r-- 1 root root  830 Mar 31 00:52 Crunchify.jar
drwxr-xr-x 3 root root 4096 Mar 31 00:52 .

And you are all set.

As you see above, you have the Crunchify.jar file created under the /root/crunchify/src/build folder.
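The myManifest file with its Main-Class entry is what makes the jar executable. The same manifest can be built and read back with the standard java.util.jar API; here is a self-contained sketch (the CrunchifyManifestDemo class is illustrative, not part of the steps above):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.jar.Attributes;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class CrunchifyManifestDemo {

    // Writes a manifest-only jar and reads its Main-Class back.
    static String roundTrip(String mainClass) throws IOException {
        Manifest mf = new Manifest();
        // Manifest-Version is mandatory, or the manifest is written out empty
        mf.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        mf.getMainAttributes().put(Attributes.Name.MAIN_CLASS, mainClass);

        File jar = File.createTempFile("crunchify", ".jar");
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar), mf)) {
            // manifest-only jar; class files would be added as JarEntry objects
        }
        try (JarFile jf = new JarFile(jar)) {
            return jf.getManifest().getMainAttributes().getValue(Attributes.Name.MAIN_CLASS);
        } finally {
            jar.delete();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Main-Class: " + roundTrip("Crunchify"));
    }
}
```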

The post How to create executable .jar file using Linux commands and without Eclipse Shortcut? appeared first on Crunchify.


Everything about Java12 – New Features, Security and Switch Expression Statement (Examples)


Java 12 - All new stuff - Details by Crunchify

Java 12 was released on 19th March 2019. As part of Java’s new rapid release cadence, it arrived just six months after Java 11.

In this tutorial, we will go over all the changes and new features in Java 12.

Personally, I have switched to Java 12 for all of my development, but for production it’s still too early. Keep this tutorial bookmarked for when you want to move your production projects to Java 12.

What’s new in Java 12? New features in Java 12:

There are quite a few internal changes as well as user-facing features in Java 12. Let’s take a look at what is inside.

Change-1) Concurrent Class unloading

A traditional garbage collector unloads unused classes only during stop-the-world GC cycles, which can show up as a pause in the process or a CPU spike, although in practice it often goes unnoticed.

With ZGC (the Z Garbage Collector), Java 12 also supports concurrent class unloading. Because it happens as part of the normal concurrent GC cycle, there is no extra pause and no additional memory usage.

Concurrent class unloading is enabled by default when running with ZGC in Java 12. No further action required 🙂

How to disable concurrent class unloading?

  • Just start your application with the JVM command-line argument -XX:-ClassUnloading

Change-2) Get more details on JVM Crash

When there is an OOM (Out Of Memory) error or the JVM crashes, Java usually creates dump files with all the details.

-XX:HeapDumpPath=/tmp/crunchify/ -XX:+HeapDumpOnOutOfMemoryError

With this JVM parameters, Dump files will be created under /tmp/crunchify/ folder on OOM error.

There is one more option added in Java12:

-XX:+ExtensiveErrorReports

A new log file named hs_err_pid<pid>.log will be created with full details about the JVM crash. This is very helpful in your production environment if you are seeing frequent crashes and want to debug further.

It is disabled by default, but you can enable extensive crash reports by adding the above JVM command-line parameter.
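To confirm which of these flags the running JVM was actually started with, you can list its input arguments at runtime via the standard RuntimeMXBean:

```java
import java.lang.management.ManagementFactory;
import java.util.List;

public class CrunchifyJvmArgs {
    public static void main(String[] args) {
        // Every -XX / -D flag passed on the command line shows up here
        List<String> jvmArgs = ManagementFactory.getRuntimeMXBean().getInputArguments();
        jvmArgs.forEach(System.out::println);
    }
}
```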

Change-3) Compact Number Formatting

java.text adds support for compact number formats: 1000 can be written as 1K and 100000 as 100K.

package crunchify.com.tutorials;

import java.text.NumberFormat;
import java.util.Locale;

/**
 * @author Crunchify.com
 * Java12 Compact Number format example
 *
 */

public class CrunchifyJava12CompactNumber {
	public static void main(String args[]) {
		
		// NumberFormat is the abstract base class for all number formats.
		// This class provides the interface for formatting and parsing numbers. NumberFormat also provides methods for determining which locales have number formats, and what their names are.
		NumberFormat crunchifyFormat = NumberFormat.getCompactNumberInstance(Locale.US, NumberFormat.Style.SHORT);
		// getCompactNumberInstance returns a compact number format for the specified locale and formatStyle.
		
		String crunchifyResult = crunchifyFormat.format(100000);
		
		System.out.println("NumberFormat.Style.SHORT Result: "+crunchifyResult);
	}
}

Result:

NumberFormat.Style.SHORT Result: 100K
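The same getCompactNumberInstance API also supports NumberFormat.Style.LONG, and the returned format can parse compact strings back into numbers. A small sketch:

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class CrunchifyCompactLong {
    public static void main(String[] args) throws ParseException {
        // LONG style spells the unit out
        NumberFormat longFormat = NumberFormat.getCompactNumberInstance(Locale.US, NumberFormat.Style.LONG);
        System.out.println(longFormat.format(100000)); // 100 thousand

        // parsing works in the other direction too
        NumberFormat shortFormat = NumberFormat.getCompactNumberInstance(Locale.US, NumberFormat.Style.SHORT);
        System.out.println(shortFormat.parse("100K")); // 100000
    }
}
```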

Change-4) Java Security Enhancements

security-libs/java.security changes:

  • disallow and allow Options for java.security.manager
    • if disallow then System.setSecurityManager can’t be used to set security manager.
  • -groupname Option Added to keytool Key Pair Generation
    • a user can specify a named group when generating a key pair.
  • Customizing PKCS12 keystore Generation
    • includes algorithms and parameters for
  • New JFR Security Events
    • What is JFR (Java Flight Recorder)
    • 4 new JFR events added
      • jdk.X509Certificate
      • jdk.X509Validation
      • jdk.TLSHandshake
      • jdk.SecurityPropertyModification

Change-5) JEP 325: Switch Expressions

JEP 325 Switch Expressions Tutorial by Crunchify

The enhanced switch statement is now supported in Java 12.

  • Java 12 adds the case L -> arrow syntax; no break statement is necessary.
  • Switch expressions
    • a simplified form of the switch statement
    • if a label matches, only the expression to the right of the arrow is evaluated
    • no break statement needed

CrunchifyJava12SwitchExample.java

package crunchify.com.tutorials;

import java.util.Scanner;

/**
 * @author Crunchify.com
 * What's new in Java12 Switch statement?
 *
 */
public class CrunchifyJava12SwitchExample {
	public static void main(String[] args) {

		Scanner crunchifyObj = new Scanner(System.in);
		log("Enter company name from: Google, Facebook, PayPal, eBay, Twitter, LinkedIn, Apple");
		
		String company = crunchifyObj.nextLine();
		log("Selected Company: " + company);
		
		// Pre-Java12 Switch statement
		switch (company) {
			case "Google":
			case "Facebook":
			case "PayPal":
			case "eBay":
			case "Twitter":
				log("Pre-Java12: This switch is for companies Google, Facebook, PayPal, eBay & Twitter");
				break;
			case "":
			case "Apple":
			case "LinkedIn":
				log("Pre-Java12: This switch is for companies Apple & LinkedIn");
				break;
			default:
				log("Pre-Java12: Oops... Invalid company");
		}
		
		/**
		 * Java 12 based case L -> syntax operation.
		 * Here there isn't any break necessary.
		 */
		switch (company) {
			case "Google", "Facebook", "PayPal", "eBay", "Twitter" -> log("Java12: This switch is for companies Google, Facebook, PayPal, eBay & Twitter");
			case "Apple", "LinkedIn" -> log("Java12: This switch is for companies Apple & LinkedIn");
			default -> {
				log("Java12: Oops... Invalid company");
			}
		}
		
		/**
		 * This is switch expression
		 */
		final String companyName;
		companyName = switch (company) {
			case "Google", "Facebook", "PayPal", "eBay", "Twitter" -> ("Java12 Expression: This switch is for companies Google, Facebook, PayPal, eBay & Twitter");
			case "Apple", "LinkedIn" -> ("Java12 Expression: This switch is for companies Apple & LinkedIn");
			
			/**
			 * it's also possible to do switch operation without a block and break
 			 */
			default -> {
				break "Java12 Expression: Oops... Invalid company";
			}
		};
		
		log(companyName);
		
	}
	
	public static void log(String result) {
		System.out.println(result);
	}
	
}

IntelliJ IDEA Result:

Java 12 Switch Statement Tutorial Result - Crunchify

Enter company name from: Google, Facebook, PayPal, eBay, Twitter, LinkedIn, Apple
Twitter

Selected Company: Twitter

Pre-Java12: This switch is for companies Google, Facebook, PayPal, eBay & Twitter
Java12: This switch is for companies Google, Facebook, PayPal, eBay & Twitter
Java12 Expression: This switch is for companies Google, Facebook, PayPal, eBay & Twitter

Change-6) JVM Constants API

java.lang.constant: as you may know, every Java class file has a constant pool that stores the constants the class loads and uses at runtime.

Java 12 adds an API (JEP 334) for modeling and manipulating these constants symbolically.
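For example, the new java.lang.constant package can describe a class symbolically, and loadable constants such as strings return their own nominal descriptor. A small sketch using ClassDesc and Constable.describeConstable:

```java
import java.lang.constant.ClassDesc;

public class CrunchifyConstantsDemo {
    public static void main(String[] args) {
        // A symbolic reference to java.lang.String, independent of any ClassLoader
        ClassDesc stringDesc = ClassDesc.of("java.lang.String");
        System.out.println(stringDesc.displayName());  // String
        System.out.println(stringDesc.packageName());  // java.lang

        // A String is a loadable constant and describes itself
        System.out.println("crunchify".describeConstable().get()); // crunchify
    }
}
```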

Removed features from Java12:

Removed features from Java12 - Crunchify Tips

Deprecated features from Java12:

Deprecated features from Java12 - Crunchify Tips

Let me know if you have any handy tutorial on Java12 which you would like to include here.

The post Everything about Java12 – New Features, Security and Switch Expression Statement (Examples) appeared first on Crunchify.

How to install & setup Apache Tomcat server on Linux Ubuntu host [on Linode]


Host Apache Tomcat Server on Linode Public Server

Setting up the Apache Tomcat web server on a publicly hosted Linux host is a great way to host your service.

If you want to publish a Java application to the world, you need a public IP and hence a public-facing host. With that, you can host your Java application on Tomcat and access it via a public URL.

If you have any of below questions then you are at right place:

  • How to configure Tomcat to be accessible on internet?
  • How to make my IP publicly accessible to make my local Tomcat server public?
  • How to install Tomcat 9 on Ubuntu 18.10?
  • How do I start Tomcat in Ubuntu?
  • How do I start Tomcat in Linux?
  • How do I check if Java is installed on Ubuntu?
  • How to Install and Configure Apache Tomcat 9 on Ubuntu?

Let’s get started:

Step-1

  • Register for a host on Linode
  • Once login you will be redirected to https://cloud.linode.com/dashboard
  • Click on Create and Select Linode

Create Linode Step-1

Step-2

Next step is to provide all details for your Linode.

  • For Image: choose Ubuntu
  • For Region: choose US Dallas or your preferred region
  • For Linode Plan: Select Nanode which is just $5/month (1GB: 1 CPU, 25G Storage, 1G RAM)
  • Set Linode Label: crunchify
  • Set password
  • Click Create button

Linode Select Image, Region and Plan

Linode Set Password and Label

Step-3

From left panel click on Linodes and click on crunchify.

New Crunchify Linode Created

Step-4

Click on Networking Tab.

Login to Newly Created Linode from networking Tab

Open a Terminal window if you are on Mac OS X. On Windows you can use the PuTTY client.

Use the command ssh root@45.56.77.82 to log in to your newly created host.

bash-3.2$ ssh root@45.56.77.82
Warning: Permanently added '45.56.77.82' (ECDSA) to the list of known hosts.
root@45.56.77.82's password: 
Welcome to Ubuntu 18.10 (GNU/Linux 4.18.0-13-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Thu Jan 10 20:48:00 UTC 2019

  System load:  0.0               Processes:           93
  Usage of /:   8.6% of 24.06GB   Users logged in:     1
  Memory usage: 12%               IP address for eth0: 45.56.77.82
  Swap usage:   0%

 * MicroK8s is Kubernetes in a snap. Made by devs for devs.
   One quick install on a workstation, VM, or appliance.

   - https://bit.ly/microk8s

 * Full K8s GPU support is now available!

   - https://blog.ubuntu.com/2018/12/10/using-gpgpus-with-kubernetes


0 packages can be updated.
0 updates are security updates.

Step-5

The next step is to install Java/JDK. Just use the command below to install the JDK.

root@crunchify:~# sudo apt install default-jre

Verify that Java is installed:

root@crunchify:~# which java
/usr/bin/java

root@crunchify:~# java -version
openjdk version "11.0.1" 2018-10-16
OpenJDK Runtime Environment (build 11.0.1+13-Ubuntu-2ubuntu1)
OpenJDK 64-Bit Server VM (build 11.0.1+13-Ubuntu-2ubuntu1, mixed mode, sharing)

Step-6

Install Apache Tomcat on Linux host. Follow below linux commands.

root@crunchify:~# cd /

root@crunchify:/# mkdir crunchify

root@crunchify:/# cd crunchify/

root@crunchify:/crunchify# wget http://apache.cs.utah.edu/tomcat/tomcat-9/v9.0.14/bin/apache-tomcat-9.0.14.zip

root@crunchify:/crunchify# apt install unzip

root@crunchify:/crunchify# unzip apache-tomcat-9.0.14.zip 

root@crunchify:/crunchify# chmod -R 777 apache-tomcat-9.0.14

root@crunchify:/crunchify# cd apache-tomcat-9.0.14/bin/

root@crunchify:/crunchify/apache-tomcat-9.0.14/bin# ./catalina.sh start -d
Using CATALINA_BASE:   /crunchify/apache-tomcat-9.0.14
Using CATALINA_HOME:   /crunchify/apache-tomcat-9.0.14
Using CATALINA_TMPDIR: /crunchify/apache-tomcat-9.0.14/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /crunchify/apache-tomcat-9.0.14/bin/bootstrap.jar:/crunchify/apache-tomcat-9.0.14/bin/tomcat-juli.jar
Tomcat started.

How to check if Tomcat process is up and running?

root@crunchify:/crunchify/apache-tomcat-9.0.14/bin# ps -few | grep tomcat
root     15854     1 11 20:58 pts/1    00:00:03 /usr/bin/java -Djava.util.logging.config.file=/crunchify/apache-tomcat-9.0.14/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dignore.endorsed.dirs= -classpath /crunchify/apache-tomcat-9.0.14/bin/bootstrap.jar:/crunchify/apache-tomcat-9.0.14/bin/tomcat-juli.jar -Dcatalina.base=/crunchify/apache-tomcat-9.0.14 -Dcatalina.home=/crunchify/apache-tomcat-9.0.14 -Djava.io.tmpdir=/crunchify/apache-tomcat-9.0.14/temp org.apache.catalina.startup.Bootstrap -d start

Apache Tomcat Process is up and Running - Crunchify Tips

Step-7

Now it’s time to verify Apache Tomcat Process using Browser URL.

By default Tomcat runs on port 8080. Just go to http://45.56.77.82:8080/ and you will see Tomcat running on port 8080.

Host Apache Tomcat Server on Linode Public Server

And you are all set. Let me know if you face any issue running Tomcat Server on newly created Linode node.

The post How to install & setup Apache Tomcat server on Linux Ubuntu host [on Linode] appeared first on Crunchify.

How to Create Your Own Non-Blocking, Fixed Size Queue in Java? Same as EvictingQueue


Implement your own FixedSize, Non-Blocking Queue in Java - Crunchify

If you don’t want to use Google Guava’s EvictingQueue external dependency in your Java Enterprise project, then you should consider creating your own non-blocking, fixed-size queue.

Basically we want to avoid the java.lang.IllegalStateException: Queue full exception, as you may have noticed in the previous tutorial ArrayBlockingQueue Vs. EvictingQueue.

Exception Occurred: 
java.lang.IllegalStateException: Queue full
	at java.util.AbstractQueue.add(AbstractQueue.java:98)
	at java.util.concurrent.ArrayBlockingQueue.add(ArrayBlockingQueue.java:312)
	at crunchify.com.tutorial.CrunchifyArrayBlockingQueueVsEvictingQueue.CrunchifyArrayBlockingQueue(CrunchifyArrayBlockingQueueVsEvictingQueue.java:23)
	at crunchify.com.tutorial.CrunchifyArrayBlockingQueueVsEvictingQueue.main(CrunchifyArrayBlockingQueueVsEvictingQueue.java:41)

Do you also have the below questions? Then you are at the right place:

  • Is there a fixed sized queue which removes excessive elements
  • Static (Fixed Size) Singleton Queue
  • Is there a PriorityQueue implementation with fixed capacity

Let’s get started

  1. Create class CrunchifyNonBlockingFixedSizeQueue which extends class ArrayBlockingQueue
  2. Create constructor and initialize class with provided size
  3. @Override add(E e) operation with our own implementation
    • Before adding element check if we reached max size
    • if (max size), remove element from head
    • then add new element

Complete implementation

package crunchify.com.tutorial;

import java.util.concurrent.ArrayBlockingQueue;

/**
 * @author Crunchify.com Feel free to use this in your Enterprise Java Project
 */

public class CrunchifyNonBlockingFixedSizeQueue<E> extends ArrayBlockingQueue<E> {

	/**
	 * generated serial number
	 */
	private static final long serialVersionUID = -7772085623838075506L;

	// Size of the queue
	private int size;

	// Constructor
	public CrunchifyNonBlockingFixedSizeQueue(int crunchifySize) {

		// Creates an ArrayBlockingQueue with the given (fixed) capacity and default access policy
		super(crunchifySize);
		this.size = crunchifySize;
	}

	// If the queue is full, remove the oldest/first element from the queue (FIFO)
	// Do we need to synchronize this add() method? What do you think?
	@Override
	synchronized public boolean add(E e) {

		// Check if queue full already?
		if (super.size() == this.size) {
			// remove element from queue if queue is full
			this.remove();
		}
		return super.add(e);
	}

}

How to test?

package crunchify.com.tutorial;

import java.util.concurrent.ArrayBlockingQueue;

/**
 * @author Crunchify.com
 * 
 */

public class CrunchifyNonBlockingFixedSizeQueueTest {

	public static void main(String[] args) {

		// Test ArrayBlockingQueue with size 10
		CrunchifyOwnNonBlockingFixedSizeQueue();
	}

	private static void CrunchifyOwnNonBlockingFixedSizeQueue() {

		// crunchifyQueue with type CrunchifyNonBlockingFixedSizeQueue
		ArrayBlockingQueue<String> crunchifyQueue = new CrunchifyNonBlockingFixedSizeQueue<String>(10);

		String crunchifyMsg = "This is CrunchifyNonBlockingFixedSizeQueueTest - ";
		try {
			// We are looping for 15 times - No error even after queue is full
			for (int i = 1; i <= 15; i++) {
				crunchifyQueue.add(crunchifyMsg + i);
				log("CrunchifyNonBlockingFixedSizeQueueTest size: " + crunchifyQueue.size());
			}
		} catch (Exception e) {
			log("\nException Occurred: ");
			e.printStackTrace();
		}

	}

	private static void log(String crunchifyText) {
		System.out.println(crunchifyText);

	}
}

Result: no queue full error

CrunchifyNonBlockingFixedSizeQueueTest size: 1
CrunchifyNonBlockingFixedSizeQueueTest size: 2
CrunchifyNonBlockingFixedSizeQueueTest size: 3
CrunchifyNonBlockingFixedSizeQueueTest size: 4
CrunchifyNonBlockingFixedSizeQueueTest size: 5
CrunchifyNonBlockingFixedSizeQueueTest size: 6
CrunchifyNonBlockingFixedSizeQueueTest size: 7
CrunchifyNonBlockingFixedSizeQueueTest size: 8
CrunchifyNonBlockingFixedSizeQueueTest size: 9
CrunchifyNonBlockingFixedSizeQueueTest size: 10
CrunchifyNonBlockingFixedSizeQueueTest size: 10   <== No queue full error
CrunchifyNonBlockingFixedSizeQueueTest size: 10
CrunchifyNonBlockingFixedSizeQueueTest size: 10
CrunchifyNonBlockingFixedSizeQueueTest size: 10
CrunchifyNonBlockingFixedSizeQueueTest size: 10

As you may have noticed here – we have synchronized the add(E e) method in our utility. Do you think we should? What if more than one thread tries to add elements while the queue is full?
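To see why the synchronized keyword matters here, below is a small, self-contained stress test. This is a sketch: the class names EvictingFixedSizeQueue and QueueStressTest are mine, and the class inlines the same evict-then-add logic as the tutorial's queue so the example runs on its own. Two threads call add() concurrently; because eviction and insertion run atomically inside the synchronized method, the size never exceeds capacity and no Queue full exception is thrown.

```java
import java.util.concurrent.ArrayBlockingQueue;

// Minimal self-contained copy of the evicting queue from this tutorial,
// so the stress test below can run on its own.
class EvictingFixedSizeQueue<E> extends ArrayBlockingQueue<E> {
    private final int maxSize;

    EvictingFixedSizeQueue(int size) {
        super(size);
        this.maxSize = size;
    }

    // synchronized makes "evict then add" atomic across threads
    @Override
    public synchronized boolean add(E e) {
        if (super.size() == maxSize) {
            this.remove(); // evict oldest element (head) when full
        }
        return super.add(e);
    }
}

public class QueueStressTest {
    public static void main(String[] args) throws InterruptedException {
        EvictingFixedSizeQueue<Integer> queue = new EvictingFixedSizeQueue<>(10);

        // Two threads hammer add() concurrently
        Runnable producer = () -> {
            for (int i = 0; i < 10_000; i++) {
                queue.add(i);
            }
        };
        Thread t1 = new Thread(producer);
        Thread t2 = new Thread(producer);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Without synchronized, thread A could evict (size 9), thread B could
        // then see size 9 and skip eviction, and A's add() would race B's add()
        // past capacity, throwing IllegalStateException: Queue full.
        System.out.println("Final size: " + queue.size());
    }
}
```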

The post How to Create Your Own Non-Blocking, Fixed Size Queue in Java? Same as EvictingQueue appeared first on Crunchify.

How to Style and Customize WordPress Comment Form? Plus, Modify appearance of Comments with CSS


How to Customize WordPress Comment Form - Crunchify Tips

comment_form() outputs a complete commenting form for use within a WordPress template.

Most strings and form fields may be controlled through the $args array passed into the function, while you may also choose to use the comment_form_default_fields filter to modify the array of default fields if you’d just like to add a new one or remove a single field.

All fields are also individually passed through a filter of the form comment_form_field_$name where $name is the key used in the array of fields.

The WordPress 3.0+ function – comment_form() has 2 parameters that can be optionally modified to your liking.

Here are the example arguments that can be used:

<?php comment_form($args, $post_id); ?>

  • $args: Contains our options for the strings and fields within the form.
  • $post_id: The post ID used to generate the form; if null, the current post is used.

Today I modified my comment form and added some CSS, and noticed a very big difference.

I’d like to share my changes with you.

Let’s get started:

Method-1) Using functions.php file – only for Genesis

  • Go to Appearance
  • Click Editor
  • Open functions.php file and put below code.

This is what I have on Crunchify, as I’m using the Genesis WordPress framework theme.

// Modify comments header text in comments
add_filter( 'genesis_title_comments', 'child_title_comments');
function child_title_comments() {
    return __(comments_number( '<h3>No Responses</h3>', '<h3>1 Response</h3>', '<h3>% Responses...</h3>' ), 'genesis');
}

// Unset URL from comment form
function crunchify_move_comment_form_below( $fields ) { 
    $comment_field = $fields['comment']; 
    unset( $fields['comment'] ); 
    $fields['comment'] = $comment_field; 
    return $fields; 
} 
add_filter( 'comment_form_fields', 'crunchify_move_comment_form_below' ); 

// Add placeholder for Name and Email
function modify_comment_form_fields($fields){
    // Populate the commenter/required-field values the field strings below rely on
    $commenter = wp_get_current_commenter();
    $req = get_option( 'require_name_email' );
    $aria_req = ( $req ? " aria-required='true'" : '' );
    $fields['author'] = '<p class="comment-form-author">' . '<input id="author" placeholder="Your Name (No Keywords)" name="author" type="text" value="' .
                esc_attr( $commenter['comment_author'] ) . '" size="30"' . $aria_req . ' />'.
                '<label for="author">' . __( 'Your Name' ) . '</label> ' .
                ( $req ? '<span class="required">*</span>' : '' )  .
                '</p>';
    $fields['email'] = '<p class="comment-form-email">' . '<input id="email" placeholder="your-real-email@example.com" name="email" type="text" value="' . esc_attr(  $commenter['comment_author_email'] ) .
                '" size="30"' . $aria_req . ' />'  .
                '<label for="email">' . __( 'Your Email' ) . '</label> ' .
                ( $req ? '<span class="required">*</span>' : '' ) 
                 .
                '</p>';
    $fields['url'] = '<p class="comment-form-url">' .
             '<input id="url" name="url" placeholder="http://your-site-name.com" type="text" value="' . esc_attr( $commenter['comment_author_url'] ) . '" size="30" /> ' .
            '<label for="url">' . __( 'Website', 'domainreference' ) . '</label>' .
               '</p>';
    
    return $fields;
}
add_filter('comment_form_default_fields','modify_comment_form_fields');

Method-2) For any other WordPress theme

Just open comments.php file and replace $args with below code to beautify comment code with placeholders.

// Populate the commenter/required-field values the field strings below rely on
$commenter = wp_get_current_commenter();
$req = get_option( 'require_name_email' );
$aria_req = ( $req ? " aria-required='true'" : '' );

$args = array(
    'fields' => apply_filters(
        'comment_form_default_fields', array(
            'author' =>'<p class="comment-form-author">' . '<input id="author" placeholder="Your Name (No Keywords)" name="author" type="text" value="' .
                esc_attr( $commenter['comment_author'] ) . '" size="30"' . $aria_req . ' />'.
                '<label for="author">' . __( 'Your Name' ) . '</label> ' .
                ( $req ? '<span class="required">*</span>' : '' )  .
                '</p>'
                ,
            'email'  => '<p class="comment-form-email">' . '<input id="email" placeholder="your-real-email@example.com" name="email" type="text" value="' . esc_attr(  $commenter['comment_author_email'] ) .
                '" size="30"' . $aria_req . ' />'  .
                '<label for="email">' . __( 'Your Email' ) . '</label> ' .
                ( $req ? '<span class="required">*</span>' : '' ) 
                 .
                '</p>',
            'url'    => '<p class="comment-form-url">' .
             '<input id="url" name="url" placeholder="http://your-site-name.com" type="text" value="' . esc_attr( $commenter['comment_author_url'] ) . '" size="30" /> ' .
            '<label for="url">' . __( 'Website', 'domainreference' ) . '</label>' .
               '</p>'
        )
    ),
    'comment_field' => '<p class="comment-form-comment">' .
        '<label for="comment">' . __( 'Let us know what you have to say:' ) . '</label>' .
        '<textarea id="comment" name="comment" placeholder="Express your thoughts, idea or write a feedback by clicking here & start an awesome comment" cols="45" rows="8" aria-required="true"></textarea>' .
        '</p>',
    'comment_notes_after' => '',
    'title_reply' => '<div class="crunchify-text"> <h5>Please Post Your Comments & Reviews</h5></div>'
);

To customize comment form, you can use any HTML tags/elements as you can see I’ve placed extra placeholder html tag above.

Comment form – Before:

Crunchify-WordPress-Comment-Form-Magaziene-Premium-Before

Comment form – After:

Updated WordPress Comment form after modified CSS and Field arguments

There are a number of different ways you could modify the comment form. Just keep adding different text and HTML styles to tweak it.

Now what? Do you want to modify CSS with below look and feel?

Modify WordPress comments Look and Feel with CSS

If you want to modify CSS of your comment form then here is a handy code which you could add to your theme’s style.css file.

/* ## Comments
--------------------------------------------- */
.comment-respond,
.entry-pings,
.entry-comments {
    color: #444;
    padding: 20px 45px 40px 45px;
    border: 1px solid #ccc;
    overflow: hidden;
    background: #fff;
    -webkit-box-shadow: 0px 0px 8px rgba(0,0,0,0.3);
    -moz-box-shadow: 0px 0px 8px rgba(0,0,0,0.3);
    box-shadow: 0px 0px 8px rgba(0,0,0,0.3);
    border-left: 4px solid #444;
}
.entry-comments h3{
    font-size: 30px;
    margin-bottom: 30px;
}
.comment-respond h3,
.entry-pings h3{
    font-size: 20px;
    margin-bottom: 30px;
}
.comment-respond {
    padding-bottom: 5%;
    margin: 20px 1px 20px 1px;
        border-left: none !important;
}
.comment-header {
    color: #adaeb3;
    font-size: 14px;
    margin-bottom: 20px;
}
.comment-header cite a {
    border: none;
    font-style: normal;
    font-size: 16px;
    font-weight: bold;
}
.comment-header .comment-meta a {
    border: none;
    color: #adaeb3;
}
li.comment {
    background-color: #fff;
    border-right: none;
}
.comment-content {
    clear: both;
    overflow: hidden;
}
.comment-list li {
    font-size: 14px;
    padding: 20px 30px 20px 50px;
}
.comment-list .children {
    margin-top: 40px;
    border: 1px solid #ccc;
}
.comment-list li li {
    background-color: #f5f5f6;
}
.comment-list li li li {
    background-color: #fff;
}
.comment-respond input[type="email"],
.comment-respond input[type="text"],
.comment-respond input[type="url"] {
    width: 50%;
}
.comment-respond label {
    display: block;
    margin-right: 12px;
}
.entry-comments .comment-author {
    margin-bottom: 0;
    position: relative;
}
.entry-comments .comment-author img {
    border-radius: 50%;
    border: 5px solid #fff;
    left: -80px;
    top: -5px;
    position: absolute;
    width: 60px;
}
.entry-pings .reply {
    display: none;
}
.bypostauthor {
}
.form-allowed-tags {
    background-color: #f5f5f5;
    font-size: 16px;
    padding: 24px;
}
.comment-reply-link{
    cursor: pointer;
    background-color: #444;
    border: none;
    border-radius: 3px;
    color: #fff;
    font-size: 12px;
    font-weight: 300;
    letter-spacing: 1px;
    padding: 4px 10px 4px;
    text-transform: uppercase;
    width: auto;
}
.comment-reply-link:hover{
    color: #fff;
}
.comment-notes{
    display:none;   
}

We are currently using the Disqus comment plugin. So far we like it and will continue to use it.

The post How to Style and Customize WordPress Comment Form? Plus, Modify appearance of Comments with CSS appeared first on Crunchify.

JVM Tuning: Heapsize, Stacksize and Garbage Collection Fundamental


java-jvm-tuning-crunchify-tips

Heap Size:

When a Java program starts, the Java Virtual Machine (JVM) gets some memory from the operating system. The JVM uses this memory for all its needs, and part of this memory is called the Java heap. Whenever we create an object using the new operator (or by any other means), the object is allocated memory from the heap, and when the object dies or is garbage collected, that memory goes back to the heap.

Tutorial on Increase Eclipse Memory Size to avoid OOM on Startup.

JVM option  Meaning
-Xms        initial Java heap size
-Xmx        maximum Java heap size
-Xmn        the size of the heap for the young generation

It is good practice for big production projects to set the minimum -Xms and maximum -Xmx heap sizes to the same value.

For efficient garbage collection, the -Xmn value should be lower than the -Xmx value. Note that heap size alone does not determine the amount of memory your process uses.

If you monitor your Java process with an OS tool like top or Task Manager, you may see the amount of memory in use exceed the amount you specified for -Xmx. -Xmx limits only the Java heap; Java allocates memory for other things as well, including a stack for each thread. It is not unusual for the total memory consumption of the VM to exceed the value of -Xmx.
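You can see these limits from inside a running JVM via the standard Runtime API. A quick sketch (actual numbers depend on your -Xms/-Xmx flags and JVM defaults, so no fixed output is shown):

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;

        // maxMemory() corresponds to -Xmx: the most heap the JVM will ever use
        System.out.println("Max heap   (-Xmx): " + rt.maxMemory() / mb + " MB");
        // totalMemory(): heap currently reserved (starts near -Xms, grows up to -Xmx)
        System.out.println("Current heap     : " + rt.totalMemory() / mb + " MB");
        // freeMemory(): unused portion of the currently reserved heap
        System.out.println("Free in current  : " + rt.freeMemory() / mb + " MB");
    }
}
```

Run it with, for example, java -Xms256m -Xmx256m HeapInfo to see the flags reflected in the output.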

Stack Size:

Each thread in the VM gets a stack. The stack size limits the number of threads you can have; too big a stack size and you will run out of memory, as each thread is allocated more memory than it needs.

JVM option  Meaning
-Xss        the stack size for each thread

-Xss determines the size of the stack, e.g. -Xss1024k. If the stack space is too small, you will eventually see an exception of class java.lang.StackOverflowError.
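A quick way to see -Xss in action: the small sketch below recurses until java.lang.StackOverflowError is thrown and reports the depth reached. The class name StackDepthDemo is mine, and exact depths vary by JVM and platform, but the reachable depth shrinks or grows with the stack size.

```java
public class StackDepthDemo {
    // Recurses until the thread's stack (sized by -Xss) is exhausted,
    // then unwinds, counting one frame per level.
    static int measureDepth() {
        try {
            return 1 + measureDepth(); // each call consumes one stack frame
        } catch (StackOverflowError e) {
            return 0; // the deepest frame catches the overflow
        }
    }

    public static void main(String[] args) {
        System.out.println("Stack overflowed at depth: " + measureDepth());
    }
}
```

Compare java -Xss512k StackDepthDemo against java -Xss2m StackDepthDemo to see the depth change.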

Garbage Collection:

There are essentially two GC threads running. One is a very lightweight thread which does “little” collections primarily on the Young generation of the heap. The other is the Full GC thread which traverses the entire heap when there is not enough memory left to allocate space for objects which get promoted from the Young to the older generation(s).

If there is a memory leak or inadequate heap allocated, eventually the older generation will start to run out of room causing the Full GC thread to run (nearly) continuously.

Since this process stops the world, the Java application won’t be able to respond to requests; requests will start to back up, or the JVM will run out of memory (OOM).

The amount allocated for the Young (Eden) generation is the value specified with -Xmn. The amount allocated for the older generation is the value of -Xmx minus -Xmn. For example, with -Xmx512m -Xmn128m, the old generation gets 512 - 128 = 384 MB.

Generally, you don’t want the Eden to be too big or it will take too long for the GC to look through it for space that can be reclaimed.
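You can inspect the generation sizes the JVM actually chose via the standard java.lang.management API. A sketch (pool names such as "G1 Eden Space" or "PS Eden Space" vary by garbage collector, and some pools report no fixed maximum):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class GenerationSizes {
    public static void main(String[] args) {
        long mb = 1024 * 1024;
        // Each memory pool corresponds to a heap region (Eden, Survivor, Old Gen, ...)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long max = pool.getUsage().getMax(); // -1 means undefined/unbounded
            System.out.printf("%-25s max: %s%n", pool.getName(),
                    max < 0 ? "undefined" : (max / mb) + " MB");
        }
    }
}
```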

The post JVM Tuning: Heapsize, Stacksize and Garbage Collection Fundamental appeared first on Crunchify.

Ads.txt (Authorized Digital Sellers) file and Google Adsense Revenue [All in one Guide]


Ads.txt complete guide - Crunchify Tips

Let me start with my recent experience with the ads.txt file. To be honest, I just came to know about it last week. Ads.txt is short for Authorized Digital Sellers.

It’s a new standard introduced by Google and other advertising companies around November 2017.

Let’s first understand what is ads.txt file

Consider this scenario. You own a blog/site and you allow advertisers to run ads on it. But what if your site is hacked and somebody else is running ads on your site, and you are completely unaware of it?

Your site’s revenue is going to their account rather than yours.

That’s where the ads.txt file comes into the picture. As the name suggests, it allows you to authorize specific digital sellers. Only the DIRECT or RESELLER publishers you specify in your ads.txt file are allowed to place ads on your site.

Here are some ads.txt files for your reference:

  • https://mashable.com/ads.txt
  • https://www.searchenginejournal.com/ads.txt
  • https://crunchify.com/ads.txt

This system creates a more transparent ecosystem around digital ads.

What if you are running Google Adsense Ads?

Google is not mandating the ads.txt file currently. That means publishers are not required to put an ads.txt file at their root domain. In other words, if you don’t have ads.txt under your site’s root domain, you are allowing any publisher to run ads on your site; no verification is performed.

It’s like catch all block in Java. Allow everything 🙂

But there is a big if – if you do have an ads.txt file, then it’s absolutely required that you put your Google AdSense publisher ID into it.

Google will warn you in its admin console if you miss putting your AdSense publisher ID into it. The AdSense system will also send you emails with more details.

Here are the screenshot and message:

If you have ads.txt file and you forgot to put your Google Adsense Publisher ID.

Error Message: Earnings at risk – One or more of your ads.txt files doesn’t contain your AdSense publisher ID. Fix this now to avoid severe impact to your revenue.

Earnings at risk - One or more of your ads.txt files doesn't contain your AdSense publisher ID

Email with more details:

Potential revenue decrease if no action taken

How to fix Google Adsense ads.txt error?

As you see in above email just add a single line to your ads.txt file

google.com, pub-7816438xxxxxxxxx, DIRECT, f08c47fec0942fa0

Without this line, AdSense will stop serving ads on your site. Once you update your ads.txt file, it will take up to 24 hours for the system to start serving ads again.

Current ads.txt project version 1.0.1 details on official site: Link.

I hope you now have a detailed idea of how to set up your ads.txt file the right way to stop losing revenue from Google AdSense and other advertisers. It’s an easy process, and I would recommend everybody implement an ads.txt file.

Are you using any other Ad network?

Please follow the other ad network’s detail page on how to implement it and add more lines to your ads.txt file.

Let me know if your revenue increases after whitelisting digital sellers. Also, if your AdSense account is disabled, share a screenshot and I’ll take a look into the details.

The post Ads.txt (Authorized Digital Sellers) file and Google Adsense Revenue [All in one Guide] appeared first on Crunchify.

How to Start Stop Apache Tomcat via Command Line? Check if Tomcat is already running and Kill command


How to Check if Tomcat is already running

Apache Tomcat (or simply Tomcat) is an open source web server and servlet container developed by the Apache Software Foundation (ASF). Tomcat implements the Java Servlet and the JavaServer Pages (JSP) specifications from Oracle Corporation, and provides a “pure Java” HTTP web server environment for Java code to run.

If you have any of below questions then you are at right place:

  • Several ports (8080, 8081, 8082) required by Tomcat Server at localhost are already in use
  • Tomcat Server Error – Port 8080 already in use
  • port 8080 required is in use
  • port 8080 already in use eclipse
  • how to stop port 8080 in windows

I’ve set up Tomcat as a Windows Service. Running Tomcat as a Windows Service provides a number of benefits that are essential when moving from a development setup to a production environment.

Benefit-1) Setup reliable automatic startup on boot

  • Essential in an environment where you may want to remotely reboot a Java System after maintenance without worrying about whether your server will come back online.

Benefit-2) Setup Tomcat server startup without active user login

  • In a data center, it is not reasonable to expect an active login from the system just to run Tomcat. In fact, Tomcat is often run on blade servers that may not even have an active monitor connected to them. Windows Services are owned by the System, and can be started without an active user.

Benefit-3) Better Security

Recently I wanted to start/stop my Tomcat server via the command line, as I wanted to create a quick shell script to do it. The official documentation provides the below commands in the form //XX// ServiceName.
Apache Tomcat Startup Scripts - Crunchify

Available command line options are:

  • //TS// Run the service as a console application. This is the default operation; it is called if no option is provided. The ServiceName is the name of the executable without the .exe suffix, e.g. Tomcat6
  • //RS// Run the service Called only from ServiceManager
  • //SS// Stop the service
  • //US// Update service parameters
  • //IS// Install service
  • //DS// Delete service Stops the service if running

But rather than doing it this way, I found the below commands very useful and simple.

1) Windows (if Tomcat is setup as Windows Service)

  • To Start server: <Tomcat Root>/bin>Tomcat8.exe start
  • To Stop server: <Tomcat Root>/bin>Tomcat8.exe stop

2) Windows (if you have downloaded binaries as .zip)

  • To Start server: <Tomcat Root>/bin>catalina.bat start
  • To Stop server: <Tomcat Root>/bin>catalina.bat stop

3) Mac/Linux/Unix (if you have downloaded binaries as .zip)

  • To Start server: <Tomcat Root>/bin>./catalina.sh start
  • To Stop server: <Tomcat Root>/bin>./catalina.sh stop

Below are all catalina.sh command parameters:

Usage: catalina.sh ( commands ... )
commands:
  debug             Start Catalina in a debugger
  debug -security   Debug Catalina with a security manager
  jpda start        Start Catalina under JPDA debugger
  run               Start Catalina in the current window
  run -security     Start in the current window with security manager
  start             Start Catalina in a separate window
  start -security   Start in a separate window with security manager
  stop              Stop Catalina, waiting up to 5 seconds for the process to end
  stop n            Stop Catalina, waiting up to n seconds for the process to end
  stop -force       Stop Catalina, wait up to 5 seconds and then use kill -KILL if still running
  stop n -force     Stop Catalina, wait up to n seconds and then use kill -KILL if still running
  configtest        Run a basic syntax check on server.xml - check exit code for result
  version           What version of tomcat are you running?

Startup Screenshot:

Tomcat Server started - Crunchify Tips

How to check if Tomcat is already running and kill existing tomcat process.

Step-1) Find out the process using command ps -ef | grep tomcat

bash-3.2$ ps -ef | grep tomcat
  502 56188     1   0  7:31PM ttys001    0:04.23 /Library/Java/JavaVirtualMachines/jdk1.8.0_51.jdk/Contents/Home/bin/java -Djava.util.logging.config.file=/Users/appshah/Downloads/apache-tomcat-8.5.4/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -classpath /Users/appshah/Downloads/apache-tomcat-8.5.4/bin/bootstrap.jar:/Users/appshah/Downloads/apache-tomcat-8.5.4/bin/tomcat-juli.jar -Dcatalina.base=/Users/appshah/Downloads/apache-tomcat-8.5.4 -Dcatalina.home=/Users/appshah/Downloads/apache-tomcat-8.5.4 -Djava.io.tmpdir=/Users/appshah/Downloads/apache-tomcat-8.5.4/temp org.apache.catalina.startup.Bootstrap start
  502 56618 55587   0  7:34PM ttys001    0:00.00 grep tomcat

Here the 2nd column value is the process ID. In our case it’s 56188.

You can visit http://localhost:8080 and you should see the welcome page.

Welcome Tomcat Page - Crunchify

Step-2) Kill process using command kill -9 <process ID>

bash-3.2$ kill -9 56188

Here, 56188 is a process ID which we got it from step-1.

Now, link http://localhost:8080/ shouldn’t be working for you.
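If you are on Java 9 or newer, you can also locate (and even stop) the Tomcat process programmatically via the ProcessHandle API instead of parsing ps output. A sketch – the class FindTomcatPid and the "catalina" keyword are my own choices; the keyword matches Tomcat’s bootstrap classpath in the command line:

```java
import java.util.Optional;

public class FindTomcatPid {
    // Returns the PID of the first live process whose command line contains the keyword
    static Optional<Long> findPid(String keyword) {
        return ProcessHandle.allProcesses()
                .filter(p -> p.info().commandLine().orElse("").contains(keyword))
                .map(ProcessHandle::pid)
                .findFirst();
    }

    public static void main(String[] args) {
        findPid("catalina").ifPresentOrElse(
                pid -> System.out.println("Tomcat PID: " + pid),
                () -> System.out.println("Tomcat is not running"));
        // To stop it, the equivalent of kill would be:
        // ProcessHandle.of(pid).ifPresent(ProcessHandle::destroy);
    }
}
```

Note that commandLine() may be empty for processes you lack permission to inspect, so run this as the same user that started Tomcat.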

The post How to Start Stop Apache Tomcat via Command Line? Check if Tomcat is already running and Kill command appeared first on Crunchify.


How to install Boto3 and set Amazon EC2 Keys? Boto: A Python interface SDK for Amazon Web Services


How to install Boto3 on Mac - Amazon AWS SDK

What is Boto?

Boto is the Amazon AWS SDK for Python. Ansible internally uses Boto to connect to Amazon EC2 instances, and hence you need the Boto library in order to run Ansible against EC2 from your laptop/desktop.

Recently I started playing with Amazon EC2 and wanted to start and stop Amazon EC2 instances using the command line.

One of the requirements for installing the Amazon CLI (Command Line Interface) is to have Boto on your system. I’m using a MacBook Pro 13″ for all of my development.

In this tutorial we will go over steps on how to install Boto and Boto3 on MacOS.

Here is a command: pip install boto3 --user

bash1.2 $ pip install boto3 --user
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
Collecting boto3
  Downloading https://files.pythonhosted.org/packages/34/53/e7953f300d345f8b95a578085aba17bc7145f913b32e1f00f9a105602851/boto3-1.9.143-py2.py3-none-any.whl (128kB)
    100% || 133kB 808kB/s 
Collecting s3transfer<0.3.0,>=0.2.0 (from boto3)
  Using cached https://files.pythonhosted.org/packages/d7/de/5737f602e22073ecbded7a0c590707085e154e32b68d86545dcc31004c02/s3transfer-0.2.0-py2.py3-none-any.whl
Collecting jmespath<1.0.0,>=0.7.1 (from boto3)
  Using cached https://files.pythonhosted.org/packages/83/94/7179c3832a6d45b266ddb2aac329e101367fbdb11f425f13771d27f225bb/jmespath-0.9.4-py2.py3-none-any.whl
Collecting botocore<1.13.0,>=1.12.143 (from boto3)
  Downloading https://files.pythonhosted.org/packages/e0/9a/400c9a3634f7f40453634609925131f9b0c11903b06d0cc7270be3f0c372/botocore-1.12.143-py2.py3-none-any.whl (5.4MB)
    100% || 5.4MB 3.4MB/s 
Collecting futures<4.0.0,>=2.2.0; python_version == "2.6" or python_version == "2.7" (from s3transfer<0.3.0,>=0.2.0->boto3)
  Downloading https://files.pythonhosted.org/packages/2d/99/b2c4e9d5a30f6471e410a146232b4118e697fa3ffc06d6a65efde84debd0/futures-3.2.0-py2-none-any.whl
Collecting urllib3<1.25,>=1.20; python_version == "2.7" (from botocore<1.13.0,>=1.12.143->boto3)
  Using cached https://files.pythonhosted.org/packages/01/11/525b02e4acc0c747de8b6ccdab376331597c569c42ea66ab0a1dbd36eca2/urllib3-1.24.3-py2.py3-none-any.whl
Collecting python-dateutil<3.0.0,>=2.1; python_version >= "2.7" (from botocore<1.13.0,>=1.12.143->boto3)
  Using cached https://files.pythonhosted.org/packages/41/17/c62faccbfbd163c7f57f3844689e3a78bae1f403648a6afb1d0866d87fbb/python_dateutil-2.8.0-py2.py3-none-any.whl
Collecting docutils>=0.10 (from botocore<1.13.0,>=1.12.143->boto3)
  Downloading https://files.pythonhosted.org/packages/50/09/c53398e0005b11f7ffb27b7aa720c617aba53be4fb4f4f3f06b9b5c60f28/docutils-0.14-py2-none-any.whl (543kB)
    100% || 552kB 1.1MB/s 
Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1; python_version >= "2.7"->botocore<1.13.0,>=1.12.143->boto3)
  Using cached https://files.pythonhosted.org/packages/73/fb/00a976f728d0d1fecfe898238ce23f502a721c0ac0ecfedb80e0d88c64e9/six-1.12.0-py2.py3-none-any.whl
matplotlib 1.3.1 requires nose, which is not installed.
matplotlib 1.3.1 requires tornado, which is not installed.
Installing collected packages: futures, urllib3, jmespath, six, python-dateutil, docutils, botocore, s3transfer, boto3
Successfully installed boto3-1.9.143 botocore-1.12.143 docutils-0.14 futures-3.2.0 jmespath-0.9.4 python-dateutil-2.8.0 s3transfer-0.2.0 six-1.12.0 urllib3-1.24.3

As you see in the above log, it’s complaining about missing nose and tornado dependencies.

Just execute the below commands to install both dependencies.

bash1.2 $ pip install nose --user
bash1.2 $ pip install tornado --user

That’s it.

What is next? Setup Amazon AWS Credentials.

  1. Open credentials file using command
    • vi ~/.aws/credentials
  2. Add below line to the file.

bash-3.2$ cat ~/.aws/credentials
[default]
aws_access_key_id=AKIAYXNIWNWKSWIY27AIF
aws_secret_access_key=AKIAYXNIAEKDIY27AIF

How to set default Amazon EC2 region?

  1. Open config file using command
    • vi ~/.aws/config
  2. Add below lines to the file.

[default]
region=us-east-2

That’s it. Now you have successfully set up Boto3 and you are good to run Ansible commands and the Amazon CLI.

The post How to install Boto3 and set Amazon EC2 Keys? Boto: A Python interface SDK for Amazon Web Services appeared first on Crunchify.

What is Ansible pre_tasks? How to Update OS, Install Python and Install JRE on Remote Host?


What is Ansible pre_tasks? How to Update OS, Install Python and Install JDK on Remote Host

What is pre_tasks in Ansible?

pre_tasks is a list of tasks that Ansible executes before any of the tasks mentioned in the playbook’s .yml file.

Consider this scenario. You provisioned a new instance on Amazon EC2 or Google Cloud. The first thing you need to do is install OS updates, then install the latest Python, install Java, and so on.

Once all of above pre tasks are done, you need to copy your application and start those applications. It’s very mandatory to install all basic binaries before you copy your application dependencies.

In this tutorial we will go over all the details of how to execute pre-tasks using the Ansible pre_tasks tag.

What is Ansible pre_tasks? How to Update OS, Install Python and Install JRE on Remote Host

We will follow below scenario in this tutorial:

  1. Create file crunchify-hosts and add the IP of the host on which we will execute pre_tasks.
  2. Create file crunchify-install-python-java.yml, which is our Ansible playbook.
    • pre_task: install python-simplejson
    • pre_task: install python-minimal
    • pre_task: run system update
    • pre_task: install latest JRE
  3. Get the Python version
  4. Get the Java version
  5. Print all debug results
  6. Run command ansible-playbook -i ./crunchify-hosts crunchify-install-python-java.yml, which will perform all our tasks

crunchify-hosts file

[local]
localhost ansible_connection=local ansible_python_interpreter=python

[crunchify]
13.58.187.197

[crunchify:vars]
ansible_ssh_user=ubuntu
ansible_ssh_private_key_file=/Users/crunchify/Documents/ansible/crunchify.pem
ansible_python_interpreter=/usr/bin/python3

As you can see, I’m using the crunchify.pem file for passwordless authentication, so Ansible can connect to my host without any password prompt.

crunchify-install-python-java.yml file

We are using Ansible’s register keyword to store the return value of each raw task in a variable.

With the help of debug and stdout_lines, you can print the results on the command line.

---
- hosts: crunchify
  become: yes

  pre_tasks:
     - raw: sudo apt-get -y install python-simplejson
       register: py_simple_output
     - raw: sudo apt-get -y install python-minimal
       register: py_minimal_output
     - raw: sudo apt-get update
       register: system_output
     - raw: sudo apt-get install -y default-jre
       register: java_output

  tasks:

    - debug:
        var: py_simple_output.stdout_lines

    - debug:
        var: py_minimal_output.stdout_lines

    - debug:
        var: system_output.stdout_lines

    - debug:
        var: java_output.stdout_lines

    - name: get Python version
      shell: python --version 2>&1
      register: py_output

    - debug:
        var: py_output.stdout_lines

    - name: get Java version
      shell: java --version 2>&1
      register: java_output

    - debug:
        var: java_output.stdout_lines

Run command:

ansible-playbook -i ./crunchify-hosts crunchify-install-python-java.yml

Ansible Output:

bash1.2 $ ansible-playbook -i ./crunchify-hosts crunchify-install-python-java.yml

PLAY [crunchify] ***************************************************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************************
ok: [13.58.187.197]

TASK [raw] *********************************************************************************************************************************************************
changed: [13.58.187.197]

TASK [raw] *********************************************************************************************************************************************************
changed: [13.58.187.197]

TASK [raw] *********************************************************************************************************************************************************
changed: [13.58.187.197]

TASK [raw] *********************************************************************************************************************************************************
changed: [13.58.187.197]

TASK [debug] *******************************************************************************************************************************************************
ok: [13.58.187.197] => {
    "py_simple_output.stdout_lines": [
        "", 
        "Reading package lists... 0%", 
        "", 
        "Reading package lists... 100%", 
        "", 
        "Reading package lists... Done", 
        "", 
        "", 
        "Building dependency tree... 0%", 
        "", 
        "Building dependency tree... 50%", 
        "", 
        "Building dependency tree       ", 
        "", 
        "", 
        "Reading state information... 0%", 
        "", 
        "Reading state information... Done", 
        "", 
        "python-simplejson is already the newest version (3.13.2-1).", 
        "0 upgraded, 0 newly installed, 0 to remove and 76 not upgraded."
    ]
}

TASK [debug] *******************************************************************************************************************************************************
ok: [13.58.187.197] => {
    "py_minimal_output.stdout_lines": [
        "", 
        "Reading package lists... 0%", 
        "", 
        "Reading package lists... 100%", 
        "", 
        "Reading package lists... Done", 
        "", 
        "", 
        "Building dependency tree... 0%", 
        "", 
        "Building dependency tree... 50%", 
        "", 
        "Building dependency tree       ", 
        "", 
        "", 
        "Reading state information... 0%", 
        "", 
        "Reading state information... Done", 
        "", 
        "python-minimal is already the newest version (2.7.15~rc1-1).", 
        "0 upgraded, 0 newly installed, 0 to remove and 76 not upgraded."
    ]
}

TASK [debug] *******************************************************************************************************************************************************
ok: [13.58.187.197] => {
    "system_output.stdout_lines": [
        "", 
        "0% [Working]", 
        "            ", 
        "Hit:1 http://us-east-2.ec2.archive.ubuntu.com/ubuntu bionic InRelease", 
        "", 
        "0% [Connecting to security.ubuntu.com (91.189.88.162)]", 
        "                                                      ", 
        "Hit:2 http://us-east-2.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease", 
        "", 
        "                                                      ", 
        "Get:3 http://us-east-2.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]", 
        "", 
        "0% [Connecting to security.ubuntu.com (91.189.88.162)]", 
        "0% [1 InRelease gpgv 242 kB] [Connecting to security.ubuntu.com (91.189.88.162)", 
        "                                                                               ", 
        "0% [Connecting to security.ubuntu.com (91.189.88.162)]", 
        "0% [2 InRelease gpgv 88.7 kB] [Connecting to security.ubuntu.com (91.189.88.162", 
        "                                                                               ", 
        "0% [Waiting for headers]", 
        "0% [3 InRelease gpgv 74.6 kB] [Waiting for headers]", 
        "                                                   ", 
        "Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease", 
        "", 
        "                                                   ", 
        "0% [3 InRelease gpgv 74.6 kB]", 
        "                             ", 
        "0% [Working]", 
        "0% [4 InRelease gpgv 88.7 kB]", 
        "                             ", 
        "100% [Working]", 
        "              ", 
        "Fetched 74.6 kB in 0s (249 kB/s)", 
        "", 
        "Reading package lists... 0%", 
        "", 
        "Reading package lists... 5%", 
        "", 
        "Reading package lists... 8%", 
        "", 
        "Reading package lists... 53%", 
        "", 
        "Reading package lists... 79%", 
        "", 
        "Reading package lists... 99%", 
        "", 
        "Reading package lists... Done", 
        ""
    ]
}

TASK [debug] *******************************************************************************************************************************************************
ok: [13.58.187.197] => {
    "java_output.stdout_lines": [
        "", 
        "Reading package lists... 0%", 
        "", 
        "Reading package lists... 100%", 
        "", 
        "Reading package lists... Done", 
        "", 
        "", 
        "Building dependency tree... 0%", 
        "", 
        "Building dependency tree... 50%", 
        "", 
        "Building dependency tree       ", 
        "", 
        "", 
        "Reading state information... 0%", 
        "", 
        "Reading state information... Done", 
        "", 
        "default-jre is already the newest version (2:1.11-68ubuntu1~18.04.1).", 
        "0 upgraded, 0 newly installed, 0 to remove and 76 not upgraded."
    ]
}

TASK [get Python version] ******************************************************************************************************************************************
changed: [13.58.187.197]

TASK [debug] *******************************************************************************************************************************************************
ok: [13.58.187.197] => {
    "py_output.stdout_lines": [
        "Python 2.7.15rc1"
    ]
}

TASK [get Java version] ********************************************************************************************************************************************
changed: [13.58.187.197]

TASK [debug] *******************************************************************************************************************************************************
ok: [13.58.187.197] => {
    "java_output.stdout_lines": [
        "openjdk 11.0.2 2019-01-15", 
        "OpenJDK Runtime Environment (build 11.0.2+9-Ubuntu-3ubuntu118.04.3)", 
        "OpenJDK 64-Bit Server VM (build 11.0.2+9-Ubuntu-3ubuntu118.04.3, mixed mode, sharing)"
    ]
}

PLAY RECAP *********************************************************************************************************************************************************
13.58.187.197              : ok=13   changed=6    unreachable=0    failed=0

That’s it.

As you can see, in this tutorial we installed Python, Java and system updates on the remote host, and returned the results back to the Mac terminal window.

What’s next?

Check out the tutorial on how to copy a File, Directory or Script from localhost to a Remote host.

The post What is Ansible pre_tasks? How to Update OS, Install Python and Install JRE on Remote Host? appeared first on Crunchify.

Ansible – How to Grep (ps -few) and Kill any linux process running on Remote Host?


Ansible – How to Grep (ps -few) and Kill any linux process running on Remote Host

Ansible is a pretty amazing system admin tool. We have published a number of articles on Ansible in the last few weeks: how to copy files to a remote host, how to execute commands on remote hosts, how to install Java and Python on a remote host, and so on.

In this tutorial, we will go over how to grep a Java process running on a remote host and kill it using a simple Ansible playbook.

Let’s start by launching a sample Java process on the remote host using nohup:

ubuntu@ip-172-31-10-150:~$ nohup java CrunchifyAlwaysRunningProgram &
[1] 18174
ubuntu@ip-172-31-10-150:~$ nohup: ignoring input and appending output to 'nohup.out'

How to check if the process has started and is running on the remote host?

Ansible - How to Grep (ps -few) and Kill Process running on Remote Host?

Note the process ID 18174.

ubuntu@ip-172-31-10-150:~$ ps -few | grep CrunchifyAlwaysRunningProgram
ubuntu   18174 15069  1 15:15 pts/0    00:00:00 java CrunchifyAlwaysRunningProgram
ubuntu   18187 15069  0 15:16 pts/0    00:00:00 grep --color=auto CrunchifyAlwaysRunningProgram

Here are the steps we will follow in this tutorial:

  • Create file crunchify-hosts, which has the remote host’s IP
  • Create file crunchify-grep-kill-process.yml, which has the Ansible tasks to grep and kill the Java process
  • Run command: ansible-playbook -i ./crunchify-hosts crunchify-grep-kill-process.yml
  • Check the result on the macOS terminal console

crunchify-hosts file

[local]
localhost ansible_connection=local ansible_python_interpreter=python

[crunchify]
3.16.83.84

[crunchify:vars]
ansible_ssh_user=ubuntu
ansible_ssh_private_key_file=/Users/crunchify/Documents/ansible/crunchify.pem
ansible_python_interpreter=/usr/bin/python3

This file contains the remote host’s IP address and credentials, which let Ansible log in without a password.

crunchify-grep-kill-process.yml file

---
- hosts: crunchify
  become: yes

  tasks:
    - name: Get running processes list from remote host
      ignore_errors: yes
      shell: "ps -few | grep CrunchifyAlwaysRunningProgram | awk '{print $2}'"
      register: running_processes

    - name: Kill running processes
      ignore_errors: yes
      shell: "kill {{ item }}"
      with_items: "{{ running_processes.stdout_lines }}"

    - wait_for:
        path: "/proc/{{ item }}/status"
        state: absent
      with_items: "{{ running_processes.stdout_lines }}"
      ignore_errors: yes
      register: crunchify_processes

    - name: Force kill stuck processes
      ignore_errors: yes
      shell: "kill -9 {{ item }}"
      with_items: "{{ crunchify_processes.results | select('failed') | map(attribute='item') | list }}"

Here the Ansible playbook greps for all matching Java process IDs, kills them with kill, waits for each process to disappear from /proc, and force-kills any stuck processes with kill -9. Note that the grep command itself also shows up in the process list, which is why every task sets ignore_errors: yes — killing a PID that has already exited returns a non-zero code.
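To see what the first task’s shell pipeline produces, here is a minimal Python sketch of the ps -few | grep … | awk '{print $2}' step, run over the sample ps output shown earlier in this tutorial:

```python
# Sketch of the awk '{print $2}' step: grab the second whitespace-separated
# field (the PID) from each matching ps line. Lines copied from the sample
# output earlier in this tutorial.
ps_output = """\
ubuntu   18174 15069  1 15:15 pts/0    00:00:00 java CrunchifyAlwaysRunningProgram
ubuntu   18187 15069  0 15:16 pts/0    00:00:00 grep --color=auto CrunchifyAlwaysRunningProgram
"""

pids = [line.split()[1] for line in ps_output.splitlines() if line.strip()]

# Note: the grep process itself (18187 here) matches too, which is why the
# playbook's kill tasks need ignore_errors: yes
print(pids)  # prints ['18174', '18187']
```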

Execute Ansible Playbook:

bash1.2 $ ansible-playbook -i ./crunchify-hosts crunchify-grep-kill-process.yml

PLAY [crunchify] **************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************
ok: [3.16.83.84]

TASK [Get running processes list from remote host] ****************************************************************************************************************************
changed: [3.16.83.84]

TASK [Kill running processes] *************************************************************************************************************************************************
changed: [3.16.83.84] => (item=18174)
failed: [3.16.83.84] (item=18342) => {"changed": true, "cmd": "kill 18342", "delta": "0:00:00.002602", "end": "2019-05-10 15:20:36.957062", "item": "18342", "msg": "non-zero return code", "rc": 1, "start": "2019-05-10 15:20:36.954460", "stderr": "/bin/sh: 1: kill: No such process", "stderr_lines": ["/bin/sh: 1: kill: No such process"], "stdout": "", "stdout_lines": []}
failed: [3.16.83.84] (item=18344) => {"changed": true, "cmd": "kill 18344", "delta": "0:00:00.002648", "end": "2019-05-10 15:20:38.479354", "item": "18344", "msg": "non-zero return code", "rc": 1, "start": "2019-05-10 15:20:38.476706", "stderr": "/bin/sh: 1: kill: No such process", "stderr_lines": ["/bin/sh: 1: kill: No such process"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [wait_for] ***************************************************************************************************************************************************************
ok: [3.16.83.84] => (item=18174)
ok: [3.16.83.84] => (item=18342)
ok: [3.16.83.84] => (item=18344)

TASK [Force kill stuck processes] *********************************************************************************************************************************************

PLAY RECAP ********************************************************************************************************************************************************************
3.16.83.84                 : ok=4    changed=2    unreachable=0    failed=0

How to verify?

Just try to grep the process again on the remote host.

ubuntu@ip-172-31-10-150:~$ ps -few | grep CrunchifyAlwaysRunningProgram
ubuntu   18484 15069  0 15:22 pts/0    00:00:00 grep --color=auto CrunchifyAlwaysRunningProgram

As you can see, process ID 18174 is no longer in the list and there isn’t any Java process running.

That’s it.

This is the simplest way to grep a Java process and kill it using Ansible. Let me know if you face any issues running this Ansible playbook.

The post Ansible – How to Grep (ps -few) and Kill any linux process running on Remote Host? appeared first on Crunchify.

How to Install, Setup and Execute 1st Amazon AWS CLI (Command Line Interface) Command?


How to install, setup and execute Amazon AWS CLI

There is no doubt Amazon AWS is the biggest public cloud provider out there. I personally started playing with Amazon AWS for a few of Crunchify’s clients, and I must say AWS is very flexible.

I use my MacBook Pro for all of my development activities. If you decide to use the Amazon AWS cloud for your project, the first thing you need to do is install the AWS CLI (Command Line Interface) to start automating your basic AWS operations.

Amazon AWS - Biggest public cloud provider

In this tutorial we will go over steps to install Amazon CLI on macOS.

Let’s get started:

Step-1

Check that you have Python installed on your system.

bash-3.2$ python --version
Python 3.7.2

If you don’t see the latest version of Python, install it using the below command:

bash-3.2$ brew install python

Step-2

Download the latest Amazon AWS CLI bundle.

bash-3.2$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11.1M  100 11.1M    0     0  5598k      0  0:00:02  0:00:02 --:--:-- 5600k

Step-3

Unzip awscli-bundle.zip.

bash-3.2$ unzip awscli-bundle.zip
Archive:  awscli-bundle.zip
replace awscli-bundle/install? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: awscli-bundle/install   
  inflating: awscli-bundle/packages/botocore-1.12.145.tar.gz  
  inflating: awscli-bundle/packages/futures-3.2.0.tar.gz  
  inflating: awscli-bundle/packages/docutils-0.14.tar.gz  
  inflating: awscli-bundle/packages/virtualenv-15.1.0.tar.gz  
  inflating: awscli-bundle/packages/urllib3-1.22.tar.gz  
  inflating: awscli-bundle/packages/rsa-3.4.2.tar.gz  
  inflating: awscli-bundle/packages/urllib3-1.24.3.tar.gz  
  inflating: awscli-bundle/packages/ordereddict-1.1.tar.gz  
  inflating: awscli-bundle/packages/simplejson-3.3.0.tar.gz  
  inflating: awscli-bundle/packages/s3transfer-0.2.0.tar.gz  
  inflating: awscli-bundle/packages/python-dateutil-2.6.1.tar.gz  
  inflating: awscli-bundle/packages/jmespath-0.9.4.tar.gz  
  inflating: awscli-bundle/packages/PyYAML-3.13.tar.gz  
  inflating: awscli-bundle/packages/argparse-1.2.1.tar.gz  
  inflating: awscli-bundle/packages/pyasn1-0.4.5.tar.gz  
  inflating: awscli-bundle/packages/colorama-0.3.9.tar.gz  
  inflating: awscli-bundle/packages/python-dateutil-2.8.0.tar.gz  
  inflating: awscli-bundle/packages/awscli-1.16.155.tar.gz  
  inflating: awscli-bundle/packages/six-1.12.0.tar.gz  
  inflating: awscli-bundle/packages/setup/setuptools_scm-1.15.7.tar.gz

Step-4

Run the installer as a sudoer.

bash-3.2$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
Password:
Running cmd: /usr/bin/python virtualenv.py --no-download --python /usr/bin/python /usr/local/aws
Running cmd: /usr/local/aws/bin/pip install --no-cache-dir --no-index --find-links file:///Users/crunchify/Documents/ansible/awscli-bundle/packages/setup setuptools_scm-1.15.7.tar.gz
Running cmd: /usr/local/aws/bin/pip install --no-cache-dir --no-index --find-links file:///Users/crunchify/Documents/ansible/awscli-bundle/packages awscli-1.16.155.tar.gz
Symlink already exists: /usr/local/bin/aws
Removing symlink.
You can now run: /usr/local/bin/aws --version

Step-5

Verify: how do you check if the AWS CLI is installed successfully?

bash-3.2$ aws --version
aws-cli/1.16.155 Python/2.7.10 Darwin/18.5.0 botocore/1.12.145

That’s it. You are all set. Amazon CLI is successfully installed now.

Setup Amazon AWS CLI:

  • Go to: https://console.aws.amazon.com/iam/home?#/users
  • Create user

Create AWS User AMI - Crunchify Tips

Set permissions

Amazon AWS Admin and EC2 full access - Crunchify Tips

Download Amazon AWS Access Key ID, Secret Access Key


Just type the aws configure command and enter your AWS Access Key ID and Secret Access Key.

bash-3.2$ aws configure
AWS Access Key ID [****************QTOR]: 
AWS Secret Access Key [****************4taa]: 
Default region name [us-east-2]: 
Default output format [json]:

And you are all set. You have successfully setup Amazon AWS CLI.

Run your first Amazon AWS CLI command:

Crunchify Amazon EC2 instance details

As you can see above, I have one Amazon EC2 instance up and running. Use the describe-instances command to get all details about your instances.

bash-3.2$ aws ec2 describe-instances

{
	"Reservations": [
		{
			"Instances": [{
					"Monitoring": {
						"State": "disabled"
					},
					"PublicDnsName": "",
					"StateReason": {
						"Message": "Client.UserInitiatedShutdown: User initiated shutdown",
						"Code": "Client.UserInitiatedShutdown"
					},
					"State": {
						"Code": 48,
						"Name": "terminated"
					},
					"EbsOptimized": false,
					"LaunchTime": "2019-05-09T12:33:20.000Z",
					"ProductCodes": [],
					"CpuOptions": {
						"CoreCount": 1,
						"ThreadsPerCore": 1
					},
					"StateTransitionReason": "User initiated (2019-05-10 15:33:23 GMT)",
					"InstanceId": "i-02f2a6661658d3ef2",
					"EnaSupport": true,
					"ImageId": "ami-06088b0de148c2bac",
					"PrivateDnsName": "",
					"KeyName": "crunchify",
					"SecurityGroups": [],
					"ClientToken": "",
					"InstanceType": "t2.micro",
					"CapacityReservationSpecification": {
						"CapacityReservationPreference": "open"
					},
					"NetworkInterfaces": [],
					"Placement": {
						"Tenancy": "default",
						"GroupName": "",
						"AvailabilityZone": "us-east-2a"
					},
					"Hypervisor": "xen",
					"BlockDeviceMappings": [],
					"Architecture": "x86_64",
					"RootDeviceType": "ebs",
					"RootDeviceName": "/dev/sda1",
					"VirtualizationType": "hvm",
					"Tags": [{
						"Value": "worker",
						"Key": "Name"
					}],
					"HibernationOptions": {
						"Configured": false
					},
					"AmiLaunchIndex": 0
				},
				{
					"Monitoring": {
						"State": "disabled"
					},
					"PublicDnsName": "ec2-18-188-240-188.us-east-2.compute.amazonaws.com",
					"State": {
						"Code": 16,
						"Name": "running"
					},
					"EbsOptimized": false,
					"LaunchTime": "2019-05-09T12:33:20.000Z",
					"PublicIpAddress": "18.188.240.188",
					"PrivateIpAddress": "172.31.1.223",
					"ProductCodes": [],
					"VpcId": "vpc-8b4655e3",
					"CpuOptions": {
						"CoreCount": 1,
						"ThreadsPerCore": 1
					},
					"StateTransitionReason": "",
					"InstanceId": "i-0e19bc4bb04173c6a",
					"EnaSupport": true,
					"ImageId": "ami-06088b0de148c2bac",
					"PrivateDnsName": "ip-172-31-1-223.us-east-2.compute.internal",
					"KeyName": "crunchify",
					"SecurityGroups": [{
						"GroupName": "launch-wizard-4",
						"GroupId": "sg-06bd2ee5d14e38797"
					}],
					"ClientToken": "",
					"SubnetId": "subnet-c2447faa",
					"InstanceType": "t2.micro",
					"CapacityReservationSpecification": {
						"CapacityReservationPreference": "open"
					},
					"NetworkInterfaces": [{
						"Status": "in-use",
						"MacAddress": "02:e4:a8:93:ad:56",
						"SourceDestCheck": true,
						"VpcId": "vpc-8b4655e3",
						"Description": "",
						"NetworkInterfaceId": "eni-0b57a08339236e849",
						"PrivateIpAddresses": [{
							"PrivateDnsName": "ip-172-31-1-223.us-east-2.compute.internal",
							"PrivateIpAddress": "172.31.1.223",
							"Primary": true,
							"Association": {
								"PublicIp": "18.188.240.188",
								"PublicDnsName": "ec2-18-188-240-188.us-east-2.compute.amazonaws.com",
								"IpOwnerId": "amazon"
							}
						}],
						"PrivateDnsName": "ip-172-31-1-223.us-east-2.compute.internal",
						"InterfaceType": "interface",
						"Attachment": {
							"Status": "attached",
							"DeviceIndex": 0,
							"DeleteOnTermination": true,
							"AttachmentId": "eni-attach-06cb447cd085d5818",
							"AttachTime": "2019-05-09T12:33:20.000Z"
						},
						"Groups": [{
							"GroupName": "launch-wizard-4",
							"GroupId": "sg-06bd2ee5d14e38797"
						}],
						"Ipv6Addresses": [],
						"OwnerId": "600038600370",
						"PrivateIpAddress": "172.31.1.223",
						"SubnetId": "subnet-c2447faa",
						"Association": {
							"PublicIp": "18.188.240.188",
							"PublicDnsName": "ec2-18-188-240-188.us-east-2.compute.amazonaws.com",
							"IpOwnerId": "amazon"
						}
					}],
					"SourceDestCheck": true,
					"Placement": {
						"Tenancy": "default",
						"GroupName": "",
						"AvailabilityZone": "us-east-2a"
					},
					"Hypervisor": "xen",
					"BlockDeviceMappings": [{
						"DeviceName": "/dev/sda1",
						"Ebs": {
							"Status": "attached",
							"DeleteOnTermination": true,
							"VolumeId": "vol-077e7eb58ca59daea",
							"AttachTime": "2019-05-09T12:33:20.000Z"
						}
					}],
					"Architecture": "x86_64",
					"RootDeviceType": "ebs",
					"RootDeviceName": "/dev/sda1",
					"VirtualizationType": "hvm",
					"Tags": [{
						"Value": "Crunchify",
						"Key": "Name"
					}],
					"HibernationOptions": {
						"Configured": false
					},
					"AmiLaunchIndex": 1
				}
			],
			"ReservationId": "r-00163c475d0a29a3d",
			"Groups": [],
			"OwnerId": "600038600370"
		}
	]
}
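Since the CLI returns JSON, it is easy to post-process with a few lines of Python. Here is a minimal sketch over a trimmed-down copy of the Reservations structure above; in practice you would feed it the real aws ec2 describe-instances output:

```python
import json

# Trimmed-down copy of the describe-instances JSON shown above
raw = json.loads("""
{
  "Reservations": [
    {
      "Instances": [
        {"InstanceId": "i-02f2a6661658d3ef2", "State": {"Code": 48, "Name": "terminated"}},
        {"InstanceId": "i-0e19bc4bb04173c6a", "State": {"Code": 16, "Name": "running"}}
      ]
    }
  ]
}
""")

# Instances are nested one level down inside each reservation
for reservation in raw["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```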

And that’s it. You are all set. You have successfully performed all the below tasks:

  • Installed the Amazon AWS CLI
  • Set up the Amazon AWS CLI
  • Executed your 1st command and got the result

Let me know if you face any issues running AWS CLI commands.

The post How to Install, Setup and Execute 1st Amazon AWS CLI (Command Line Interface) Command? appeared first on Crunchify.

How to Create, Start and Configure Amazon EC2 instance using simple Ansible Script? (spawn VM remotely)


How to Create, Start and Configure Amazon EC2 instance using a Simple Ansible Script

Amazon AWS is no doubt the best public cloud out there. As we discussed in previous tutorials, Ansible is a very handy tool for sysops to maintain their company infrastructure.

In this tutorial we will go over steps on how to create, start and setup Amazon EC2 instance using simple Ansible scripts.

Details:

  1. Specify instance_type: t2.micro
  2. Specify security_group: crunchify_security_grp
    • Change the security group as per your need.
  3. Specify image: ami-crunchify231di
    • You need to create an Amazon Image (AMI) before executing this.
  4. Specify keypair: crunchify
    • This is your security key for passwordless login.
  5. Choose default region: us-east-2
    • The default region that I would recommend.
  6. Specify the number of VMs you want to start: 1
    • Start with 1 VM.
  7. Create a basic firewall (security) group
  8. Create the Amazon EC2 instance
  9. Wait for the instance to come up
  10. Get the IP address and save it in the crunchify.txt file
    • You need to create crunchify.txt before executing this Ansible script.
  11. Tag the newly created instance as crunchify

Step-1)

Install Ansible on macOS. Make sure you have set up Ansible the right way 🙂

Step-2)

You need to export your AWS Access Key ID and Secret Access Key. Please follow the tutorial on how to set up the Amazon AWS CLI to get your keys.

export AWS_ACCESS_KEY_ID=JHKHLJLHJHJK2SHIY27AIF
export AWS_SECRET_ACCESS_KEY=QLKJDKIAYXNIWN2ZHIY27AI54345HKLHJ

Step-3) Create hosts file

[local]
localhost ansible_connection=local ansible_python_interpreter=python

Step-4) Create crunchify-ec2.yml file

---
  - name: Provision an EC2 Instance. Detailed steps by Crunchify.
    hosts: local
    connection: local
    gather_facts: False
    tags: provisioning
    # required parameters
    vars:
      instance_type: t2.micro
      security_group: crunchify_security_grp
      image: ami-crunchify231di
      keypair: crunchify
      region: us-east-2 # Change the Region
      count: 1
 
    # Task that will be used to Launch/Create an EC2 Instance
    tasks:

      - name: Create a security group
        local_action: 
          module: ec2_group
          name: "{{ security_group }}"
          description: Security Group for Crunchify's EC2 Servers
          region: "{{ region }}"
          rules:
            - proto: tcp
              from_port: 22
              to_port: 22
              cidr_ip: 0.0.0.0/0
            - proto: tcp
              from_port: 8080
              to_port: 8080
              cidr_ip: 0.0.0.0/0
            - proto: tcp
              from_port: 443
              to_port: 443
              cidr_ip: 0.0.0.0/0
          rules_egress:
            - proto: all
              cidr_ip: 0.0.0.0/0
        register: basic_firewall
        
      - name: Launching Crunchify’s new EC2 Instance
        local_action: ec2 
                      group={{ security_group }} 
                      instance_type={{ instance_type}} 
                      image={{ image }} 
                      wait=true
                      wait_timeout=500 
                      region={{ region }} 
                      keypair={{ keypair }}
                      count={{count}}
        register: ec2_crunchify

      - name: Add the newly created EC2 instance(s) to the local host group
        local_action: lineinfile 
                      path=crunchify.txt
                      regexp={{ item.public_ip }} 
                      insertafter="[crunchify]" line={{ item.public_ip }}
        with_items: '{{ec2_crunchify.instances}}'

      - name: Add new instance to Crunchify's host group
        add_host:
          hostname: "{{ item.public_ip }}"
          groupname: launched
        with_items: "{{ ec2_crunchify.instances }}"

      - name: Let's wait for SSH to come up. Usually that takes ~10 seconds
        local_action: wait_for 
                      host={{ item.public_ip }} 
                      port=22 
                      state=started
        with_items: '{{ ec2_crunchify.instances }}'

      - name: Add tag to Instance(s)
        local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
        with_items: '{{ ec2_crunchify.instances }}'
        args:
          tags:
            Name: crunchify

Step-5) Execute ansible playbook

ansible-playbook -i ./hosts crunchify-ec2.yml

Ansible Result:

bash3.2 $ ansible-playbook -i ./hosts crunchify-ec2.yml 

PLAY [Provision an EC2 Instance. Detailed steps by Crunchify.] ****************************************************************************************************************

TASK [Create a security group] ************************************************************************************************************************************************
ok: [localhost -> localhost]

TASK [Master - Launch the new EC2 Instance] ***********************************************************************************************************************************
changed: [localhost -> localhost]

TASK [Add the newly created EC2 instance(s) to the local host group] **********************************************************************************************************
changed: [localhost -> localhost] => (item={u'ramdisk': None, u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-41-108.us-east-2.compute.internal', u'block_device_mapping': 
{u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-06d37e8354c769d93'}}, u'key_name': u'crunchify', u'public_ip': u'3.19.60.48', u'image_id': u'ami-crunchify231di', u'tenancy': u'default', u'private_ip': u'172.31.41.108', u'groups': 
{u'sg-0eb80f388be5a7c35': u'crunchify_security_grp'}, u'public_dns_name': u'ec2-3-19-60-48.us-east-2.compute.amazonaws.com', u'state_code': 16, u'id': u'i-0e447dd1223a40f8e', u'tags': {}, u'placement': u'us-east-2c', u'ami_launch_index': u'0', u'dns_name': u'ec2-3-19-60-48.us-east-2.compute.amazonaws.com', u'region': u'us-east-2', u'ebs_optimized': False, u'launch_time': u'2019-05-10T18:48:18.000Z', u'instance_type': u't2.micro', u'state': u'running', u'architecture': u'x86_64', u'hypervisor': u'xen', u'virtualization_type': u'hvm', u'root_device_name': u'/dev/sda1'})

TASK [Add new instance to host group] *****************************************************************************************************************************************
changed: [localhost] => (item={u'ramdisk': None, u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-41-108.us-east-2.compute.internal', u'block_device_mapping': 
{u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-06d37e8354c769d93'}}, u'key_name': u'crunchify', u'public_ip': u'3.19.60.48', u'image_id': u'ami-crunchify231di', u'tenancy': u'default', u'private_ip': u'172.31.41.108', u'groups': 
{u'sg-0eb80f388be5a7c35': u'crunchify_security_grp'}, u'public_dns_name': u'ec2-3-19-60-48.us-east-2.compute.amazonaws.com', u'state_code': 16, u'id': u'i-0e447dd1223a40f8e', u'tags': {}, u'placement': u'us-east-2c', u'ami_launch_index': u'0', u'dns_name': u'ec2-3-19-60-48.us-east-2.compute.amazonaws.com', u'region': u'us-east-2', u'ebs_optimized': False, u'launch_time': u'2019-05-10T18:48:18.000Z', u'instance_type': u't2.micro', u'state': u'running', u'architecture': u'x86_64', u'hypervisor': u'xen', u'virtualization_type': u'hvm', u'root_device_name': u'/dev/sda1'})

TASK [Wait for SSH to come up] ************************************************************************************************************************************************
ok: [localhost -> localhost] => (item={u'ramdisk': None, u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-41-108.us-east-2.compute.internal', u'block_device_mapping': 
{u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-06d37e8354c769d93'}}, u'key_name': u'crunchify', u'public_ip': u'3.19.60.48', u'image_id': u'ami-crunchify231di', u'tenancy': u'default', u'private_ip': u'172.31.41.108', u'groups': 
{u'sg-0eb80f388be5a7c35': u'crunchify_security_grp'}, u'public_dns_name': u'ec2-3-19-60-48.us-east-2.compute.amazonaws.com', u'state_code': 16, u'id': u'i-0e447dd1223a40f8e', u'tags': {}, u'placement': u'us-east-2c', u'ami_launch_index': u'0', u'dns_name': u'ec2-3-19-60-48.us-east-2.compute.amazonaws.com', u'region': u'us-east-2', u'ebs_optimized': False, u'launch_time': u'2019-05-10T18:48:18.000Z', u'instance_type': u't2.micro', u'state': u'running', u'architecture': u'x86_64', u'hypervisor': u'xen', u'virtualization_type': u'hvm', u'root_device_name': u'/dev/sda1'})

TASK [Add tag to Instance(s)] *************************************************************************************************************************************************
changed: [localhost -> localhost] => (item={u'ramdisk': None, u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-41-108.us-east-2.compute.internal', u'block_device_mapping': 
{u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-06d37e8354c769d93'}}, u'key_name': u'crunchify', u'public_ip': u'3.19.60.48', u'image_id': u'ami-crunchify231di', u'tenancy': u'default', u'private_ip': u'172.31.41.108', u'groups': 
{u'sg-0eb80f388be5a7c35': u'crunchify_security_grp'}, u'public_dns_name': u'ec2-3-19-60-48.us-east-2.compute.amazonaws.com', u'state_code': 16, u'id': u'i-0e447dd1223a40f8e', u'tags': {}, u'placement': u'us-east-2c', u'ami_launch_index': u'0', u'dns_name': u'ec2-3-19-60-48.us-east-2.compute.amazonaws.com', u'region': u'us-east-2', u'ebs_optimized': False, u'launch_time': u'2019-05-10T18:48:18.000Z', u'instance_type': u't2.micro', u'state': u'running', u'architecture': u'x86_64', u'hypervisor': u'xen', u'virtualization_type': u'hvm', u'root_device_name': u'/dev/sda1'})

PLAY RECAP ********************************************************************************************************************************************************************
localhost                  : ok=6    changed=4    unreachable=0    failed=0
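The "Wait for SSH to come up" and "Add tag to Instance(s)" tasks seen in the output above could be written roughly as follows. This is a sketch based on the task names only; the registered variable name (ec2) and parameter values are assumptions, and your playbook's exact settings may differ:

```yaml
# Sketch of the last two tasks shown in the output; values are illustrative
- name: Wait for SSH to come up
  wait_for:
    host: "{{ item.public_ip }}"
    port: 22
    delay: 60
    timeout: 320
    state: started
  with_items: "{{ ec2.instances }}"

- name: Add tag to Instance(s)
  ec2_tag:
    resource: "{{ item.id }}"
    region: us-east-2
    state: present
    tags:
      Name: crunchify
  with_items: "{{ ec2.instances }}"
```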

Let’s verify that the new instance was created successfully with all of our specifications.

Go to the Amazon AWS console to check the instance.

Link: https://us-east-2.console.aws.amazon.com/ec2/v2/home?region=us-east-2#Instances:sort=instanceId

New Amazon EC2 instance was created - Crunchify Tips

Make sure you verify all your settings.

Amazon EC2 - new security group and instance type created - Tutorial by Crunchify

Check your Tags. This is very helpful if you are dealing with hundreds of instances.

Amazon EC2 - new tag and name created - Crunchify Tips

Check the crunchify.txt file, which contains the newly created host’s IP:

bash-3.2$ cat crunchify.txt
18.217.28.189
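The IP most likely lands in crunchify.txt via a lineinfile task inside the playbook. A sketch of what that task might look like (the file path and the ec2 variable name are assumptions, not confirmed by the playbook shown):

```yaml
# Sketch: append each new instance's public IP to a local file
- name: Save the public IP of the new instance(s)
  lineinfile:
    path: ./crunchify.txt
    line: "{{ item.public_ip }}"
    create: yes
  with_items: "{{ ec2.instances }}"
```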

That’s it. Congratulations! You have just created and started a new EC2 instance on the Amazon AWS cloud remotely using Ansible.

Let me know if you face any issues creating an instance on the Amazon EC2 cloud.

The post How to Create, Start and Configure Amazon EC2 instance using simple Ansible Script? (spawn VM remotely) appeared first on Crunchify.
