How to Make Your S3 Bucket Public: A Simple Guide for Beginners

Amazon S3 (Simple Storage Service) is one of the most popular cloud storage solutions. Whether you're hosting static websites, sharing media files, or distributing software packages, there are times when making your S3 bucket public is necessary. But how do you do it without compromising security? Let’s walk through it step-by-step.


What is S3 and Why Make It Public?#

Illustration of confused person about S3

S3 allows you to store and retrieve any amount of data, from anywhere, at any time. Public access is useful when you want your files to be openly downloadable—no credentials needed. Use cases include:

  • Hosting a static website
  • Sharing public documentation
  • Providing downloadable files like media, zip archives, or datasets

Important: Be cautious—public access means anyone on the internet can view or download those files.


How to Make Your S3 Bucket Public#

There are two primary ways to make files in your S3 bucket publicly accessible:

1. Bucket Policy (Full Bucket Access)#

Illustration of bucket policy security

This method grants public access to all objects within a bucket.

Example Policy:#

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
  • What it does: Allows anyone to perform s3:GetObject (i.e., download files).
  • How to apply it:
aws s3api put-bucket-policy --bucket mybucket --policy file://public-read-policy.json
  • When to use: Great for hosting full public websites or making all files downloadable.

    For a deeper dive into IAM policies, visit AWS IAM Policies.
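    Note that on newer buckets, S3 Block Public Access is enabled by default and will cause the put-bucket-policy call above to be rejected. If that happens, you may first need to relax that setting; here is a hedged sketch (double-check which of the four flags your use case really requires before turning them off):

aws s3api put-public-access-block \
  --bucket mybucket \
  --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false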


2. Object-Level ACL (Individual File Access)#

Illustration of team working on ACL file access

You can make just one file public without exposing the whole bucket.

Example:#

aws s3api put-object-acl --bucket mybucket --key myfile.zip --acl public-read
  • What it does: Grants public read access to just myfile.zip.

  • When to use: When you only want to share select files and keep others private.

    For more details on managing ACLs, see AWS ACL Documentation.
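    To confirm the change took effect, you can read the ACL back with get-object-acl. Keep in mind that on buckets using the default "Bucket owner enforced" object-ownership setting, ACLs are disabled entirely, so the put-object-acl call above would fail until ACLs are enabled on the bucket:

aws s3api get-object-acl --bucket mybucket --key myfile.zip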


Why Public Access Might Be Helpful#

Making files public isn’t just convenient—it can power your apps and workflows:

  • Static Websites: Serve HTML/CSS/JS directly from S3.

  • Public Downloads: Let users grab resources without signing in.

  • Media Hosting: Share images, videos, or documents in a lightweight, scalable way.

    Looking for an easy way to manage your static websites? Check out Amazon S3 Static Website Hosting.
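    If the static-website use case is what you're after, the CLI can switch on website hosting and upload your files in two commands (the bucket name and local folder below are placeholders, and the public-read bucket policy from earlier still needs to be in place):

aws s3 website s3://mybucket/ --index-document index.html --error-document error.html
aws s3 sync ./site s3://mybucket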


Best Practices and Considerations#

Before making your S3 bucket public, keep these tips in mind:

  • Security: Double-check that no sensitive data is exposed.

  • Use the right method: Policies for full-bucket access, ACLs for individual files.

  • Monitor usage: Enable access logs and CloudTrail to audit activity.

    Learn more about monitoring with AWS CloudTrail Logs.


Conclusion#

Making your S3 bucket (or objects) public can unlock powerful use cases—from hosting content to sharing files freely. Just remember:

  • Use bucket policies for broad access
  • Use ACLs for targeted, file-specific access
  • Monitor and audit access to stay secure

With just a few AWS CLI commands, your content can go live in minutes—safely and intentionally.

Looking to scale your infrastructure seamlessly? Supercharge your containerized workloads by adding AWS EKS clusters with Nife.io!

Tired of complex, time-consuming deployments? Nife.io makes it effortless with One-Click Deployment—so you can launch applications instantly, without the hassle.


How to Delete Specific Lines from a File Using Line Numbers

When you're working with text files—be it config files, logs, or source code—you may need to delete specific lines based on their line numbers. This might sound intimidating, but it’s actually quite easy once you know which tool to use.

In this post, we’ll walk through several methods to remove lines using line numbers, using command-line tools like sed, awk, and even Python. Whether you're a beginner or a seasoned developer, there’s a solution here for you.


The Basic Idea#


To delete a specific range of lines from a file:

  1. Identify the start line and end line.
  2. Use a tool or script to remove the lines between those numbers.
  3. Save the changes back to the original file.

Let’s break this down by method.


1. Using sed (Stream Editor)#


sed is a command-line utility that’s perfect for modifying files line-by-line.

Basic Syntax#

sed 'START_LINE,END_LINEd' filename > temp_file && mv temp_file filename
  • Replace START_LINE and END_LINE with actual numbers.
  • d tells sed to delete those lines.

Example#

To delete lines 10 through 20:

sed '10,20d' myfile.txt > temp_file && mv temp_file myfile.txt

With Variables#

START_LINE=10
END_LINE=20
sed "${START_LINE},${END_LINE}d" myfile.txt > temp_file && mv temp_file myfile.txt

📚 More on sed line deletion
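If you'd rather skip the temp-file shuffle, sed can also edit the file in place. The flag differs between GNU sed (most Linux distros) and BSD sed (macOS):

# GNU sed: edit in place
sed -i '10,20d' myfile.txt

# BSD sed (macOS): requires a backup suffix; pass an empty string for none
sed -i '' '10,20d' myfile.txt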


2. Using awk#

awk is a pattern scanning tool. It’s ideal for skipping specific lines.

Syntax#

awk 'NR < START_LINE || NR > END_LINE' filename > temp_file && mv temp_file filename

Example#

awk 'NR < 10 || NR > 20' myfile.txt > temp_file && mv temp_file myfile.txt

This prints all lines except lines 10 through 20.

📚 Learn more about awk


3. Using head and tail#

head and tail are handy for trimming lines off the start or end of a file, and you can combine them to cut a range out of the middle.

Example#

Delete lines 10 to 20:

head -n 9 myfile.txt > temp_file
tail -n +21 myfile.txt >> temp_file
mv temp_file myfile.txt
  • head -n 9 gets lines before line 10.
  • tail -n +21 grabs everything from line 21 onward.

📚 tail command explained


4. Using perl#

perl is great for more advanced file manipulation.

Syntax#

perl -ne 'print unless $. >= START_LINE && $. <= END_LINE' filename > temp_file && mv temp_file filename

Example#

perl -ne 'print unless $. >= 10 && $. <= 20' myfile.txt > temp_file && mv temp_file myfile.txt
  • $. is the line number variable in perl.

📚 Perl I/O Line Numbering


5. Using Python#

For full control or if you’re already using Python in your workflow:

Example#

start_line = 10
end_line = 20

with open("myfile.txt", "r") as file:
    lines = file.readlines()

with open("myfile.txt", "w") as file:
    for i, line in enumerate(lines):
        if i < start_line - 1 or i > end_line - 1:
            file.write(line)

Python is especially useful if you need to add logic or conditions around what gets deleted.

📚 Working with files in Python


Conclusion#


There are plenty of ways to delete lines from a file based on line numbers:

  • Use sed for simple, fast command-line editing.
  • Choose awk for conditional line selection.
  • Go with head/tail for edge-case trimming.
  • Try perl if you’re comfortable with regex and quick one-liners.
  • Opt for Python when you need logic-heavy, readable scripts.

Explore Nife.io for modern cloud infrastructure solutions, or check out OIKOS to see how edge orchestration is done right.


Getting to Know Bitnami: What It Is, Why It Rocks, and How to Use It

If you’ve ever tried setting up a web application, you know how messy it can get—installing servers, configuring databases, dealing with software versions. That’s where Bitnami steps in to make your life easier.

In this post, we’ll break down what Bitnami is, why it’s so well-loved, and how you can start using it—whether you’re testing apps locally or deploying to the cloud.


What is Bitnami?#

Illustration of Bitnami app stack architecture: person examining question mark.

Think of Bitnami as your all-in-one app launcher. It provides pre-configured software stacks—basically bundles of apps like WordPress, Joomla, or Moodle, with all their required dependencies baked in.

You can run Bitnami on:

  • Your local computer (Mac, Windows, Linux).
  • Cloud providers like AWS, Google Cloud, or Azure.
  • Containers using Docker or Kubernetes.

Each Bitnami stack includes the app, a web server (like Apache), a database (like MySQL), and scripting languages (PHP, Python, etc.). It’s ready to go out of the box.

Explore Bitnami Application Catalog


Why is Bitnami So Popular?#

People love Bitnami because it makes app deployment almost effortless:

  • Zero hassle setup: No need to configure every component manually.
  • Works anywhere: Use it locally or on your favorite cloud.
  • Security focused: Regular updates and patches.
  • Totally free: Perfect for students, developers, and small teams.

Bitnami is especially handy when you're short on time but need something reliable and scalable.


Getting Started with Bitnami#

Illustration of Bitnami app deployment: people interacting with a computer displaying 70% progress, symbolizing deployment across local, cloud, and containers.

Step 1: Pick an App#

Go to the Bitnami website and choose the app you want—like WordPress, Redmine, or ERPNext.

Step 2: Choose How You Want to Run It#

  • Local install: Download the stack for your OS.
  • Cloud deployment: Launch the app directly to AWS, Azure, or GCP with one click.
  • Containers: Use their Docker images for ultimate portability.
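    As a rough sketch of the container route, a standalone Bitnami image can be started with a single docker run (bitnami/nginx is shown here because it runs on its own; full app stacks like WordPress normally ship with a docker-compose file that also starts the database, so follow the image's README for those):

docker run -d --name my-nginx -p 8080:8080 bitnami/nginx:latest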

Step 3: Follow the Setup Wizard#

Bitnami installers are beginner-friendly. Just follow the wizard, and your app will be up and running in minutes.

Here’s a step-by-step tutorial on deploying Bitnami WordPress on AWS


What Can You Do With Bitnami?#

Here are a few awesome use cases:

Test and Build Websites#

Create a local WordPress site to try out new themes and plugins without risking your live website.

Set Up E-Learning Platforms#

Deploy Moodle or Open edX for hosting online courses easily.

Developer Sandboxes#

Developers use Bitnami stacks to test APIs, apps, or backend systems quickly.

Run Business Tools#

Launch tools like Redmine for project tracking or ERPNext for business management.

Learn Cloud Hosting#

Bitnami removes the friction from deploying apps to the cloud, making it easier for beginners to experiment.

Read more about app deployment strategies


Why Use Bitnami?#

Here’s when Bitnami is the right fit:

  • You want your app running in minutes, not hours.
  • You don’t want to stress over configuration and dependencies.
  • You like security and want regular patches without manual updates.
  • You want to try different environments with minimal setup.

It’s also a great stepping stone into cloud development and containerization.


Conclusion#

Illustration of developer-friendly app deployment with Bitnami: person checking off items on a clipboard with a large checkmark above.

Bitnami is like the friendly co-pilot every dev wishes they had—it gives you a head start by simplifying app deployment and making experimentation frictionless.

Whether you're building a blog, launching a learning platform, or playing around with cloud architecture, Bitnami’s got your back.

Check out how nife.io manages fast deployments to see how we build scalable services at the edge.

Nife supports seamless Marketplace Deployments, enabling faster and more consistent app rollouts across environments.


A Beginner’s Guide to Using OAuth 2.0 with Amazon Cognito: Authorization Code Grant Made Simple

When you're building a web or mobile app, one of the first things you’ll need is a way to let users log in securely. That’s where Amazon Cognito comes in. It helps you manage authentication without having to build everything from scratch.

In this post, we’ll break down how to use Amazon Cognito with the OAuth 2.0 Authorization Code Grant flow—the secure and scalable way to handle user login.


What is Amazon Cognito?#

Illustration of Amazon Cognito features like user sign-up, login options, and secure access

Amazon Cognito is a user authentication and authorization service from AWS. Think of it as a toolbox for managing sign-ups, logins, and secure access to your app. Here’s what it can do:

  • Support multiple login options: Email, phone, or social logins (Google, Facebook, Apple).
  • Manage users: Sign-up, sign-in, and password recovery via user pools.
  • Access AWS services securely: Through identity pools.
  • Use modern authentication: Supports OAuth 2.0, OpenID Connect, and SAML.

📚 Learn more in the Amazon Cognito Documentation


Why Use Amazon Cognito?#

  • Scales with your app: Handles millions of users effortlessly.
  • Secure token management: Keeps user credentials and sessions safe.
  • Easy social logins: No need to build separate Google/Facebook integration.
  • Customizable: Configure user pools, password policies, and even enable MFA.
  • Tightly integrated with AWS: Works great with API Gateway, Lambda, and S3.

It’s like plugging in a powerful login system without reinventing the wheel.

🔍 Need a refresher on OAuth 2.0 concepts? Check out OAuth 2.0 and OpenID Connect Overview


How Amazon Cognito Works#

Diagram showing how Amazon Cognito User Pools and Identity Pools manage authentication and AWS access

Cognito is split into two parts:

1. User Pools#

  • Handles user sign-ups, sign-ins, and account recovery.
  • Provides access_token, id_token, and refresh_token for each user session.

2. Identity Pools#

  • Assigns temporary AWS credentials to authenticated users.
  • Uses IAM roles to control what each user can access.

When using OAuth 2.0, most of the action happens in the user pool.


Step-by-Step: Using OAuth 2.0 Authorization Code Grant with Cognito#

Flowchart of OAuth 2.0 Authorization Code Grant flow using Amazon Cognito

Step 1: Create a User Pool#

  1. Head over to the AWS Console and create a new User Pool.
  2. Under App Clients, create a client and:
    • Enable Authorization Code Grant.
    • Set your redirect URI (e.g., https://yourapp.com/callback).
    • Choose OAuth scopes like openid, email, and profile.
  3. Note down the App Client ID and Cognito domain name.

💡 Want to see this in action with JavaScript? Here's a quick read: Using OAuth 2.0 and Amazon Cognito with JavaScript
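If you prefer scripting this setup, roughly the same app-client configuration can be created with the AWS CLI; the pool ID, client name, and callback URL below are placeholders:

aws cognito-idp create-user-pool-client \
  --user-pool-id YOUR_USER_POOL_ID \
  --client-name my-web-app \
  --allowed-o-auth-flows code \
  --allowed-o-auth-flows-user-pool-client \
  --allowed-o-auth-scopes openid email profile \
  --callback-urls https://yourapp.com/callback \
  --supported-identity-providers COGNITO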


Step 2: Redirect Users to Cognito#

When someone clicks "Log In" on your app, redirect them to Cognito's OAuth2 authorize endpoint:

https://your-domain.auth.region.amazoncognito.com/oauth2/authorize?
  response_type=code&
  client_id=YOUR_CLIENT_ID&
  redirect_uri=YOUR_REDIRECT_URI&
  scope=openid+email

After login, Cognito will redirect back to your app with a code in the URL:

https://yourapp.com/callback?code=AUTH_CODE

📘 For more on how this flow works, check OAuth 2.0 Authorization Code Flow Explained


Step 3: Exchange Code for Tokens#

Use the code to request tokens from Cognito:

curl -X POST "https://your-domain.auth.region.amazoncognito.com/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=authorization_code" \
  -d "client_id=YOUR_CLIENT_ID" \
  -d "code=AUTH_CODE" \
  -d "redirect_uri=YOUR_REDIRECT_URI"

Step 4: Use the Tokens#

Once you get the tokens:

{
  "access_token": "...",
  "id_token": "...",
  "refresh_token": "...",
  "token_type": "Bearer",
  "expires_in": 3600
}
  • access_token: Use this to call your APIs.
  • id_token: Contains user info like name and email.
  • refresh_token: Helps you get new tokens when the current one expires.

Example API call:

curl -X GET "https://your-api.com/resource" \
  -H "Authorization: Bearer ACCESS_TOKEN"

When to Use Authorization Code Grant?#

This flow is ideal for server-side apps. It keeps sensitive data (like tokens) away from the browser, making it more secure.


Why This Setup Rocks#

  • Security-first: Tokens are exchanged on the backend.
  • Scalable: Works even if your app grows to millions of users.
  • AWS-native: Plays nicely with other AWS services.

Conclusion#

Amazon Cognito takes the pain out of managing authentication. Combine it with OAuth 2.0’s Authorization Code Grant, and you’ve got a secure, scalable login system that just works. Start experimenting with Cognito and see how quickly you can secure your app. Stay tuned for more tutorials, and drop your questions below if you want help with setup!

If you're looking to take your environment management further, check out how Nife handles secrets and secure configurations. It's designed to simplify secret management while keeping your workflows safe and efficient.

Nife supports a wide range of infrastructure platforms, including AWS EKS. See how teams are integrating their EKS clusters with Nife to streamline operations and unlock more value from their cloud environments.

Troubleshooting PHP Installation: A Developer's Guide to Solving PHP Setup Issues

So, you're setting up PHP and things aren't going as smoothly as you hoped. Maybe you're staring at a php -v error or wondering why your server is throwing a 502 Bad Gateway at you. Don’t sweat it—we’ve all been there.

In this guide, we’re going to walk through the most common PHP installation issues, explain what’s happening behind the scenes, and show you how to fix them without losing your sanity. Whether you’re setting up PHP for the first time or maintaining an existing server, there’s something here for you.

Install PHP on Ubuntu Server


First, What Makes Up a PHP Setup?#

Illustration showing components of a PHP setup including PHP binary, php.ini file, extensions, PHP-FPM, and web server

Before diving into the fix-it steps, let’s quickly look at the key parts of a typical PHP setup:

  • PHP Binary – The main engine that runs your PHP scripts.
  • php.ini File – The config file that controls things like error reporting, memory limits, and file uploads.
  • PHP Extensions – Add-ons like MySQL drivers or image processing libraries.
  • PHP-FPM (FastCGI Process Manager) – Manages PHP processes when working with a web server like Nginx or Apache.
  • Web Server – Apache, Nginx, etc. It passes web requests to PHP and serves the results.

Understanding how these parts work together makes troubleshooting way easier. Now, let’s fix things up!


1. PHP Not Found? Let’s Install It#

Tried running php -v and got a "command not found" error? That means PHP isn’t installed—or your system doesn’t know where to find it.

Install PHP#

Visual representation of PHP installation on different operating systems including Ubuntu, CentOS, and macOS

On Ubuntu:

sudo apt update
sudo apt install php

On CentOS:

sudo yum install php

On macOS (with Homebrew):

brew install php

Verify Installation#

Run:

php -v

If that doesn’t work, check if PHP is in your system’s $PATH. If not, you’ll need to add it.
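A quick way to check where (or whether) the php binary is being found, and to add a non-standard install location to your PATH for the current shell (the /usr/local/php/bin path is just an example):

command -v php
export PATH="$PATH:/usr/local/php/bin"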

Full PHP install guide on phoenixnap


2. No php.ini File? Here’s the Fix#

You’ve installed PHP, but it’s not picking up your php.ini configuration file? You might see something like:

Loaded Configuration File => (none)

Find or Create php.ini#

Common locations:

  • /etc/php.ini
  • /usr/local/lib/php.ini
  • Bitnami stacks: /opt/bitnami/php/etc/php.ini

If missing, copy a sample config:

cp /path/to/php-*/php.ini-development /usr/local/lib/php.ini

Then restart PHP or PHP-FPM to apply the changes.
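The quickest way to see which configuration file PHP is actually loading (and which directory it scans) is PHP's own --ini flag:

php --ini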

Understanding php.ini


3. Set Your PHPRC Variable#

Still no luck loading the config? Set the PHPRC environment variable to explicitly tell PHP where your config file lives:

export PHPRC=/usr/local/lib

To make it stick, add it to your shell config (e.g. ~/.bashrc or ~/.zshrc):

echo "export PHPRC=/usr/local/lib" >> ~/.bashrc
source ~/.bashrc

Learn more: PHP Environment Variables Explained


4. PHP-FPM Not Running? Restart It#

Getting a 502 Bad Gateway? That usually means PHP-FPM is down.

Restart PHP-FPM#

On Ubuntu/Debian (adjust the version to match your installed PHP, e.g. php8.1-fpm):

sudo systemctl restart php7.0-fpm

On CentOS/RHEL:

sudo systemctl restart php-fpm

Bitnami stack:

sudo /opt/bitnami/ctlscript.sh restart php-fpm

Check if it's running:

ps aux | grep php-fpm

If not, check the logs (see below).
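To see whether the service failed to start and why, systemd's status and journal output are usually more telling than ps alone (the unit name may be versioned on your system, e.g. php8.1-fpm):

sudo systemctl status php-fpm
sudo journalctl -u php-fpm --since "15 minutes ago"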


5. Development vs. Production Settings#

PHP offers two default config templates:

  • php.ini-development – More error messages, ideal for dev work.
  • php.ini-production – Safer settings, ideal for live sites.

Pick the one that fits your needs, and copy it to the right spot:

cp php.ini-production /usr/local/lib/php.ini

More details: PHP Runtime Configuration


6. Still Stuck? Check the Logs#

Logs are your best friends when troubleshooting.

PHP error log:

tail -f /var/log/php_errors.log

PHP-FPM error log:

tail -f /var/log/php-fpm.log

These will give you insight into config issues, missing extensions, and more.

Common PHP Errors & Fixes


Conclusion#

Conceptual image representing successful PHP setup and troubleshooting completion

Getting PHP working can be frustrating at first, but once you understand the pieces—PHP binary, php.ini, extensions, PHP-FPM, and the web server—it’s much easier to fix issues when they pop up.

To recap:

  • Install PHP
  • Make sure php.ini is where PHP expects
  • Set PHPRC if needed
  • Restart PHP-FPM if you're using Nginx/Apache
  • Check your logs

Once everything is running smoothly, your PHP-powered site or app will be good to go.

Simplify JSP Deployment – Powered by Nife.
Build, Deploy & Scale Apps Faster with Nife.

How to Convert PDF.js to UMD Format Automatically Using Babel

If you've used PDF.js in JavaScript projects, you might have noticed the pdfjs-dist package provides files in ES6 module format (.mjs). But what if your project needs UMD-compatible .js files instead?
In this guide, I'll show you how to automatically transpile PDF.js from .mjs to UMD using Babel—no manual conversion required.


Why UMD Instead of ES6 Modules?#

Explanation of when to choose UMD over ES6 Modules in JavaScript

Before we dive in, let’s clarify the difference:

ES6 Modules (.mjs)

  • Modern JavaScript standard
  • Works natively in newer browsers and Node.js
  • Uses import/export syntax

UMD (.js)

  • Works in older browsers, Node.js, and AMD loaders
  • Better for legacy projects or bundlers that don’t support ES6

If your environment doesn’t support ES6 modules, UMD is the way to go.

See how module formats differ →


The Solution: Automate Transpilation with Babel#

How Babel automates JavaScript transpilation between versions

Instead of searching for pre-built UMD files (which may not exist), we’ll use Babel to convert them automatically.

Step 1: Install Babel & Required Plugins#

First, install these globally (or locally in your project):

npm install --global @babel/cli @babel/preset-env @babel/plugin-transform-modules-umd
  • @babel/cli → Runs Babel from the command line
  • @babel/preset-env → Converts modern JS to compatible code
  • @babel/plugin-transform-modules-umd → Converts modules to UMD format

For more on Babel configurations, check out the official Babel docs.

Step 2: Create the Transpilation Script#

Save this as transpile_pdfjs.sh:

#!/bin/bash

# Check if Babel is installed
if ! command -v npx &> /dev/null; then
    echo "Error: Babel (via npx) is required. Install Node.js first."
    exit 1
fi

# Define source (.mjs) and destination (UMD .js) folders
SRC_DIR="pdfjs-dist/build"
DEST_DIR="pdfjs-dist/umd"

# Create the output folder if missing
mkdir -p "$DEST_DIR"

# Run Babel to convert .mjs → UMD .js
npx babel "$SRC_DIR" \
    --out-dir "$DEST_DIR" \
    --extensions ".mjs" \
    --ignore "**/*.min.mjs" \
    --presets @babel/preset-env \
    --plugins @babel/plugin-transform-modules-umd

# Check if successful
if [ $? -eq 0 ]; then
    echo "Success! UMD files saved in: $DEST_DIR"
else
    echo "Transpilation failed. Check for errors above."
fi

Merge multiple PDFs into a single file effortlessly with our Free PDF Merger and Split large PDFs into smaller, manageable documents using our Free PDF Splitter.

Step 3: Run the Script#

  1. Make it executable:

    chmod +x transpile_pdfjs.sh
  2. Execute it:

    ./transpile_pdfjs.sh

What’s Happening?#

Automating JavaScript code conversion using Babel for cross-browser support

  • Checks for Babel → Ensures the tool is installed.
  • Creates a umd folder → Stores the converted .js files.
  • Transpiles .mjs → UMD → Uses Babel to convert module formats.
  • Skips minified files → Avoids re-processing .min.mjs.
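
Once the script finishes, a quick sanity check is to list the output folder and try loading one of the converted files with Node's require; the pdf.js filename below is an assumption, so use whichever files actually appear in pdfjs-dist/umd:

ls pdfjs-dist/umd
node -e "const pdfjs = require('./pdfjs-dist/umd/pdf.js'); console.log(typeof pdfjs);"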

Want to automate your JS build process further? Check out this guide from BrowserStack.


Final Thoughts#

Now you can use PDF.js in any environment, even if it doesn’t support ES6 modules!

🔹 No manual conversion → Fully automated.
🔹 Works with the latest pdfjs-dist → Always up-to-date.
🔹 Reusable script → Run it anytime you update PDF.js.

And if you want to bundle your PDF.js output, Rollup’s guide to output formats is a great next read.
Next time you need UMD-compatible PDF.js, just run this script and you’re done!
Simplify the deployment of your Node.js applications: check out nife.io.

How to Fix a WordPress Site Stuck at 33% Loading on AWS Lightsail

If your WordPress site hosted on AWS Lightsail freezes at 33% when loading, don’t panic—this is a common issue, often caused by a misbehaving plugin. Since Lightsail runs WordPress in a managed environment, plugin conflicts or performance bottlenecks can sometimes cause this problem.

In this guide, I’ll walk you through troubleshooting steps to identify and fix the problematic plugin so your site loads properly again.


Why Does WordPress Get Stuck at 33%?#

Confused user illustration for WordPress stuck

When your site hangs at 33%, it usually means WordPress is waiting for a response from a slow or failing process—often a plugin. This could happen because:

  • A plugin is conflicting with another plugin or theme
  • An outdated plugin is incompatible with your WordPress or PHP version
  • A resource-heavy plugin (like backup or SEO tools) is slowing things down
  • A buggy plugin is causing errors that prevent the page from loading

Since AWS Lightsail doesn’t provide direct error logs in the dashboard, we’ll need to manually check and disable plugins to find the culprit.


Step-by-Step Troubleshooting#

User navigating steps to troubleshoot WordPress issues

1. Access Your WordPress Site via SSH#

Since you can’t access the WordPress admin dashboard (because the site is stuck), you’ll need to log in to your Lightsail instance via SSH:

  1. Go to the AWS Lightsail console.
  2. Click on your WordPress instance.
  3. Under "Connect", click "Connect using SSH" (or use your own SSH client with the provided key).

Once connected, navigate to the plugins folder:

cd /opt/bitnami/apps/wordpress/htdocs/wp-content/plugins

More on SSH access in Lightsail
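
Depending on how old your Lightsail image is, WordPress may live under a slightly different Bitnami path; newer images typically use /opt/bitnami/wordpress instead of /opt/bitnami/apps/wordpress/htdocs. A quick check for whichever exists:

ls /opt/bitnami/apps/wordpress/htdocs/wp-content/plugins 2>/dev/null \
  || ls /opt/bitnami/wordpress/wp-content/plugins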

2. Temporarily Disable All Plugins#

To check if a plugin is causing the issue, we’ll disable all of them at once by renaming the plugins folder:

for plugin in $(ls); do mv "$plugin" "${plugin}_disabled"; done

This adds _disabled to each plugin’s folder name, making WordPress ignore them.
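
Later, when you want to undo the rename wholesale and restore the original folder names, the same trick works in reverse:

for plugin in *_disabled; do mv "$plugin" "${plugin%_disabled}"; done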

3. Check If Your Site Loads#

Checking if site loads correctly

After disabling plugins, refresh your WordPress site. If it loads normally, the problem is definitely plugin-related.

4. Re-enable Plugins One by One#

Now, we’ll re-enable plugins one at a time to find the troublemaker.

For example, to re-enable "Yoast SEO", run:

mv yoast-seo_disabled yoast-seo

After enabling each plugin, refresh your site. If it freezes again, the last plugin you enabled is likely the issue.

5. Clear Cache and Restart Services#

Sometimes, cached data can interfere. Clear the cache and restart your web server:

rm -rf /opt/bitnami/apps/wordpress/htdocs/wp-content/cache/*
sudo /opt/bitnami/ctlscript.sh restart apache

Bitnami restart commands

How to clear WordPress cache properly

This ensures changes take effect.

6. Fix or Replace the Problematic Plugin#

Once you’ve found the faulty plugin, you have a few options:

Update it – Check if a newer version is available.
Find an alternative – Some plugins have better alternatives.
Contact support – If it’s a premium plugin, reach out to the developer.

Find plugin alternatives on WordPress.org

Best practices for evaluating WordPress plugins


Final Thoughts#

A WordPress site freezing at 33% is frustrating, but the fix is usually straightforward—a misbehaving plugin. By disabling plugins via SSH and re-enabling them one by one, you can quickly identify the culprit.

Since AWS Lightsail doesn’t provide detailed debugging tools, this manual method is the most reliable way to troubleshoot. Once you find the problematic plugin, updating, replacing, or removing it should get your site back to normal.

Ask questions or share your experience on the Bitnami Community Forum

To deploy a static site or frontend framework (e.g., React, Vue, Angular), refer to the Nife Build File Deployment guide for configuring and uploading your build assets.

Check out our solutions on nife.io

Comparing and Debugging ORA vs. PG Stored Procedures: A Generic Example

Stored procedures are an essential part of relational databases: they encapsulate logic, improve performance, and automate processes. Both Oracle (ORA) and PostgreSQL (PG) support stored procedures, but the two implementations differ in important ways. This post covers the distinctions between ORA and PG stored procedures along with debugging techniques, using a simple generic example to illustrate the differences.

1. Stored Procedures in Oracle vs. PostgreSQL: Key Differences#


Both ORA and PG support stored procedures, but they differ significantly in syntax, functionality, and debugging techniques. Let's look at the primary differences:

Syntax Differences#

Oracle (ORA):#

Oracle stored procedures are typically created using the CREATE PROCEDURE command and utilize PL/SQL, a procedural extension of SQL. They explicitly use IN, OUT, and IN OUT parameters and are wrapped in a BEGIN...END block.

PostgreSQL (PG):#

PostgreSQL uses PL/pgSQL for stored procedures and functions, which is similar to Oracle's PL/SQL but differs in syntax and capabilities. In PG:

  • Stored procedures are created using CREATE PROCEDURE (introduced in version 11).
  • Functions are created using CREATE FUNCTION.
  • The combined parameter mode is written as INOUT (one word), rather than Oracle's IN OUT.

Example: A Generic Stored Procedure#

The following example checks whether a case has a disbursement for a particular recipient type ('FIPS' or 'OTHP') and sets an output flag accordingly.

Oracle (ORA) Example#

CREATE OR REPLACE PROCEDURE check_case_in_fips_othp(
    p_case_id IN  VARCHAR,
    p_flag    OUT CHAR,
    p_msg     OUT VARCHAR
) AS
BEGIN
    -- 'S' (success) if at least one matching disbursement exists
    SELECT 'S' INTO p_flag
    FROM disbursements
    WHERE case_id = p_case_id
      AND recipient_type IN ('FIPS', 'OTHP')
      AND ROWNUM = 1;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        p_flag := 'N';
        p_msg  := 'No records found';
    WHEN OTHERS THEN
        p_flag := 'F';
        p_msg  := 'Error: ' || SQLERRM;
END check_case_in_fips_othp;

PostgreSQL (PG) Example#

CREATE OR REPLACE PROCEDURE check_case_in_fips_othp(
    IN  p_case_id VARCHAR,
    OUT p_flag    CHAR,
    OUT p_msg     VARCHAR
)
LANGUAGE plpgsql
AS $$
BEGIN
    -- Check if the case exists
    SELECT 'S' INTO p_flag
    FROM disbursements
    WHERE case_id = p_case_id
      AND recipient_type IN ('FIPS', 'OTHP')
    LIMIT 1;

    IF NOT FOUND THEN
        p_flag := 'N';
        p_msg  := 'No records found';
    END IF;
EXCEPTION
    WHEN OTHERS THEN
        p_flag := 'F';
        p_msg  := 'Error: ' || SQLERRM;
END;
$$;

Key Differences in Syntax#

  • Parameter declaration: Oracle writes the modes as IN, OUT, and IN OUT; PostgreSQL uses IN, OUT, and INOUT (one word). Note that OUT parameters in PostgreSQL procedures require version 14 or later.
  • Exception handling: Both support EXCEPTION blocks with WHEN OTHERS; Oracle reports the error text via SQLERRM, while PostgreSQL code also commonly surfaces errors with RAISE EXCEPTION or RAISE NOTICE.
  • Logic for no data: Oracle handles the NO_DATA_FOUND exception raised by SELECT INTO, while PostgreSQL checks the FOUND flag after the query.

2. Debugging Stored Procedures in ORA vs. PG#

Image representing debugging of stored procedures

Oracle (ORA) Debugging#

Example: Debugging with DBMS_OUTPUT#

DBMS_OUTPUT.PUT_LINE('The case flag is: ' || p_flag);

PostgreSQL (PG) Debugging#

  • Use RAISE NOTICE for debugging output.
  • Handle exceptions using RAISE EXCEPTION and log errors to a dedicated table.
  • PostgreSQL lacks an integrated debugger like Oracle SQL Developer, so debugging relies on logging and manual testing.

Example: Debugging with RAISE NOTICE#

RAISE NOTICE 'The case flag is: %', p_flag;
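
RAISE NOTICE output is delivered to the client, so you can see it straight from psql. For example, running an anonymous DO block prints the notice alongside the command result (the database name is a placeholder):

psql -d your_database <<'SQL'
DO $$
DECLARE
    v_flag CHAR := 'S';
BEGIN
    RAISE NOTICE 'The case flag is: %', v_flag;
END;
$$;
SQL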

3. Conclusion#


Despite having strong stored procedure functionality, Oracle and PostgreSQL differ greatly in syntax, error management, and debugging techniques. Here's a quick recap:

  • Syntax: Oracle declares parameter modes as IN, OUT, and IN OUT; PostgreSQL uses IN, OUT, and INOUT.
  • Exception handling: Oracle reports errors via SQLERRM, while PostgreSQL typically relies on RAISE EXCEPTION (and RAISE NOTICE for diagnostics).
  • Debugging: Oracle has more integrated tools such as DBMS_OUTPUT and SQL Developer, whereas PostgreSQL depends on RAISE NOTICE and logging.

By understanding these differences and using effective debugging techniques, you can become a more productive developer when working with Oracle or PostgreSQL stored procedures.

For deploying and managing databases efficiently, check out Nife.io, a cutting-edge platform that simplifies database deployment and scaling.

Learn more in the Database Deployment Guide.


How to Handle PostgreSQL Cursors in Java: A Practical Guide

PostgreSQL cursors can be a little tricky to work with, especially when you need to consume them from Java applications. If you've worked with relational databases and experimented with PL/SQL (Oracle's procedural language), cursors will look familiar; PostgreSQL, however, handles and returns them differently.

This blog post will show you how to programmatically retrieve cursor data, interact with PostgreSQL cursors in Java, and give some real-world examples.

What's a Cursor Anyway?#

An illustration of a computer cursor with a question mark, representing the concept

Using a cursor, which is essentially a pointer, you can get rows from a query one at a time or in batches without putting the entire result set into memory all at once. Think of it as a way to handle large datasets without overtaxing your computer.

In a database, you often get all of the results at once when you run a query. By fetching rows in chunks instead, a cursor can handle large amounts of data while improving performance and resource management.

Things get interesting when a PostgreSQL function returns a cursor and you want to consume it from Java.

Setting Up a Cursor in PostgreSQL#

Step-by-step guide on setting up a cursor in PostgreSQL with example code

Let's start with a PostgreSQL function that returns a cursor. We'll assume you have a table called employees with columns like employee_id, first_name, and salary. Here's a basic function that opens a cursor for this table:

CREATE OR REPLACE FUNCTION get_employee_cursor()
RETURNS REFCURSOR AS $$
DECLARE
    emp_cursor REFCURSOR;
BEGIN
    OPEN emp_cursor FOR
        SELECT employee_id, first_name, salary
        FROM employees;
    RETURN emp_cursor;
END;
$$ LANGUAGE plpgsql;

This function get_employee_cursor opens a cursor for a simple SELECT query on the employees table and returns it.

How to Fetch the Cursor Data in Java#

To communicate with the database in Java, we can utilize JDBC (Java Database Connectivity). Because the function that returns the cursor is a callable function, you must use a CallableStatement when working with cursors in PostgreSQL. Here's how to accomplish that:

import java.sql.*;

public class CursorExample {
    public static void main(String[] args) {
        // Database connection details
        String url = "jdbc:postgresql://localhost:5432/your_database";
        String user = "your_user";
        String password = "your_password";

        try (Connection connection = DriverManager.getConnection(url, user, password)) {
            // Enable transactions (required for cursors in PostgreSQL)
            connection.setAutoCommit(false);

            // Step 1: Call the function that returns a cursor
            try (CallableStatement callableStatement = connection.prepareCall("{ ? = call get_employee_cursor() }")) {
                callableStatement.registerOutParameter(1, Types.OTHER); // Cursor is of type "OTHER"
                callableStatement.execute();

                // Step 2: Retrieve the cursor
                ResultSet resultSet = (ResultSet) callableStatement.getObject(1);

                // Step 3: Iterate through the cursor and display results
                while (resultSet.next()) {
                    int employeeId = resultSet.getInt("employee_id");
                    String firstName = resultSet.getString("first_name");
                    double salary = resultSet.getDouble("salary");
                    System.out.printf("Employee ID: %d, Name: %s, Salary: %.2f%n", employeeId, firstName, salary);
                }

                // Close the ResultSet
                resultSet.close();
            }

            // Commit the transaction
            connection.commit();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

Breaking Down the Code#

A step-by-step breakdown of code logic with annotations

Connection Setup#

  • We connect to PostgreSQL using the DriverManager.getConnection() method.
  • connection.setAutoCommit(false) is crucial because cursors in PostgreSQL work within a transaction. By disabling auto-commit, we ensure the transaction is handled properly.

Calling the Cursor-Returning Function#

  • We use a CallableStatement to execute the function get_employee_cursor(), which returns a cursor. This is similar to calling a stored procedure in other databases.
  • We register the output parameter (the cursor) using registerOutParameter(1, Types.OTHER). In JDBC, cursors are treated as Types.OTHER.

Fetching Data from the Cursor#

  • Once the cursor is returned, we treat it like a ResultSet. The cursor essentially acts like a pointer that we can iterate over.
  • We loop through the result set using resultSet.next() and retrieve the data (like employee_id, first_name, and salary).

Commit the Transaction#

  • Since the cursor is part of a transaction, we commit the transaction after we're done fetching and processing the data.
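
To compile and run the example, the PostgreSQL JDBC driver needs to be on the classpath. A minimal sketch, with the driver version as an assumption (grab the current jar from jdbc.postgresql.org):

# Download the PostgreSQL JDBC driver (check jdbc.postgresql.org for the latest version)
curl -O https://jdbc.postgresql.org/download/postgresql-42.7.3.jar

# Compile and run with the driver on the classpath (use ; instead of : on Windows)
javac CursorExample.java
java -cp .:postgresql-42.7.3.jar CursorExample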

When Would You Use Cursors in Java?#

Managing Big Data Sets#

It could take a lot of memory to load all of your records at once if you have a lot of them—millions, for instance. By retrieving the data in chunks via a cursor, you may conserve memory.

Performance Optimization#

For large result sets, it is usually more efficient to fetch data in batches or row by row, which lessens the strain on your database and application.

Streaming Data#

Using cursors to get and process data in real time is a smart strategy when working with streams.

Final Thoughts#

Although working with PostgreSQL cursors from Java might seem a bit more involved than in Oracle, large data sets can be managed efficiently with the right approach. By using a CallableStatement to obtain the cursor and iterating over the result set, you can take full advantage of cursors without running into memory or performance issues.

Regardless of whether you're working with large datasets or need more exact control over how data is pulled from the database, cursors are a helpful addition to any PostgreSQL toolbox. Just be aware that, unlike Oracle, PostgreSQL requires the explicit retrieval of cursor data, but it is easy to comprehend and effective once you do.

For deploying and managing databases efficiently, check out Nife.io, a cutting-edge platform that simplifies database deployment and scaling.

Learn more in the Database Deployment Guide.

For more details, check out the official PostgreSQL documentation on Cursors.

How to Open Ports on Your EC2 Instance Using UFW (Uncomplicated Firewall)

If you've ever worked with AWS EC2 instances, you know that keeping your instance secure is crucial. One way to do this is by managing your firewall, and in this blog post, we'll go over how to configure UFW (Uncomplicated Firewall) on your EC2 instance to allow specific ports—like SSH (port 22), MySQL (port 3306), and HTTP (port 80)—so you can connect to your instance and run services smoothly.

Why Use UFW?#

Illustration highlighting the importance of using UFW

On Ubuntu and other Debian-based systems, UFW is a straightforward command-line interface for controlling firewall rules. Because it is easy to set up and still provides a high degree of security, it is ideal for EC2 instances. The goal is to allow the traffic you need while keeping unnecessary ports closed to the internet.

Prerequisites#

Before diving in, make sure:

  • Your EC2 instance is running Ubuntu or another Debian-based Linux distribution.
  • You have SSH access to the instance.
  • UFW is installed (we'll check and install it if necessary).

Step-by-Step Guide to Open Ports#

Step-by-step guide on how to open ports

1. Check if UFW is Installed#

First, let's check if UFW is installed on your EC2 instance. Connect to your EC2 instance and run:

sudo ufw status

If UFW is not installed, the command will return:

ufw: command not found

In that case, install it with:

sudo apt update
sudo apt install ufw

2. Allow Specific Ports#

Now, let's open the ports you need:

# Allow SSH (port 22)
sudo ufw allow 22
# Allow MySQL (port 3306)
sudo ufw allow 3306
# Allow HTTP (port 80)
sudo ufw allow 80

These commands let traffic through on the specified ports, ensuring smooth access to your instance.

3. Enable UFW#

If UFW is not already enabled, activate it by running:

sudo ufw enable

To verify, check the status:

sudo ufw status

You should see:

To                         Action      From
--                         ------      ----
22                         ALLOW       Anywhere
3306                       ALLOW       Anywhere
80                         ALLOW       Anywhere

4. Optional: Restrict Access to Specific IPs#

You may want to restrict access to particular IPs for extra security. For instance, to only permit SSH from your IP:

sudo ufw allow from 203.0.113.0 to any port 22

You can do the same for MySQL and HTTP:

sudo ufw allow from 203.0.113.0 to any port 3306
sudo ufw allow from 203.0.113.0 to any port 80

This adds an extra layer of security by preventing unwanted access.

5. Verify Your Firewall Rules#

Run the following command to check active rules:

sudo ufw status

This confirms which ports are open and from which IPs they can be accessed.
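
If you later need to remove a rule, UFW lets you delete it either by its index number or by repeating the rule you originally added:

# List rules with index numbers, then delete one by number
sudo ufw status numbered
sudo ufw delete 2

# Or delete by spelling out the original rule
sudo ufw delete allow 3306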

Troubleshooting Common Issues#

Guide to troubleshooting common issues

Can't Connect via SSH?#

If you can't connect to your EC2 instance via SSH after enabling UFW, make sure port 22 is open:

sudo ufw allow 22

Also, check your AWS Security Group settings and ensure SSH is allowed. You can review AWS security group rules here.

Can't Connect to MySQL?#

Ensure port 3306 is open and verify that your database allows remote connections.
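
A quick way to confirm MySQL is actually listening on 3306, and on which address, is to check the listening sockets; a server bound only to 127.0.0.1 will refuse remote connections (the bind-address setting commonly lives in /etc/mysql/mysql.conf.d/mysqld.cnf on Ubuntu):

sudo ss -tlnp | grep 3306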

Web Traffic Not Reaching the Instance?#

Check if port 80 is open and confirm that your EC2 security group allows inbound HTTP traffic.

Conclusion#

You now know how to use UFW to open particular ports on your EC2 instance, enabling HTTP, MySQL, and SSH communication while restricting access to unwanted ports. This keeps your server safe while guaranteeing that critical services run correctly.

Related Reads#

Want to dive deeper into AWS and cloud automation? Check out these blogs:

Automating Deployment and Scaling in Cloud Environments like AWS and GCP
Learn how to streamline your deployment processes and scale efficiently across cloud platforms like AWS and GCP.

Unleash the Power of AWS DevOps Tools to Supercharge Software Delivery
Explore the tools AWS offers to enhance your software delivery pipeline, improving efficiency and reliability.

Step-by-Step Guide to Multi-Cloud Automation with SkyPilot on AWS