File and Directory Operations
# List files (long format, human-readable sizes, show hidden files)
ls -alh
# Create a new directory
mkdir my_project
# Create an empty file or update the timestamp of an existing file
touch notes.txt
# Copy a file
cp notes.txt backup.txt
# Move or rename a file/directory
mv backup.txt archive/
mv old_name.txt new_name.txt
# Remove a file
rm old.txt
# Remove a directory and its contents recursively
rm -r old_directory/
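A few common variants worth knowing (all paths here are placeholders):
# Create nested directories in one step (no error if they already exist)
mkdir -p my_project/src/utils
# Copy a directory and its contents recursively
cp -r my_project/ my_project_backup/
# Prompt for confirmation before each removal (safer interactive delete)
rm -i old.txt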
Viewing and Editing Files
# Display the entire content of a file
cat server.log
# View file content page by page (use 'q' to quit)
less server.log
# View the beginning of a file (default 10 lines, use -n for specific count)
head server.log
head -n 20 server.log
# View the end of a file (default 10 lines, use -n for specific count)
tail server.log
tail -n 50 server.log
# Continuously monitor the end of a file (useful for logs)
tail -f server.log
# Edit a file using a text editor (e.g., vi or nano)
# 'sudo' is often required for system configuration files
sudo vi /etc/systemd/system/elastic.service
nano my_script.sh
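For large log files, searching is often more practical than paging; grep is the standard tool (the pattern and file name below are illustrative):
# Show matching lines with their line numbers
grep -n "ERROR" server.log
# Count matching lines
grep -c "ERROR" server.log
# Ignore case when matching
grep -i "error" server.log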
File Permissions and Ownership
# Change file permissions (e.g., make a script executable)
# 755: Owner=rwx, Group=r-x, Others=r-x
chmod 755 script.sh
# Change the owner and group of a file or directory
# Use 'sudo' if you are not the current owner or root
sudo chown hadoop:hadoop /opt/data/file.csv
# Recursively change ownership of a directory and its contents
sudo chown -R hadoop:hadoop /opt/data/
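chmod also accepts symbolic modes, which are often easier to read than octal digits (file names are placeholders):
# Add execute permission for the owner only
chmod u+x script.sh
# Remove write permission for group and others
chmod go-w notes.txt
# Apply permissions recursively to a directory tree
chmod -R 755 my_project/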
Managing Services with systemctl
# Reload systemd configuration after editing a service file
sudo systemctl daemon-reload
# Enable a service to start automatically on system boot
sudo systemctl enable elastic.service
# Disable a service from starting automatically on system boot
sudo systemctl disable elastic.service
# Start a service immediately
sudo systemctl start elastic.service
# Stop a service immediately
sudo systemctl stop elastic.service
# Restart a service
sudo systemctl restart elastic.service
# Check the current status of a service
sudo systemctl status elastic.service
# View logs for a specific service (use -b for logs from the current boot)
sudo journalctl -u elastic.service
sudo journalctl -u elastic.service -b
# Follow service logs in real-time
sudo journalctl -f -u elastic.service
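For quick scripted checks, systemctl also offers terse query subcommands (using the same service name as above):
# Print 'active'/'inactive'; exits non-zero if the service is not running
systemctl is-active elastic.service
# Print 'enabled'/'disabled' for boot-time startup
systemctl is-enabled elastic.service
# List all loaded services and their states
systemctl list-units --type=service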
Example: Custom systemd Service File
Location: Typically /etc/systemd/system/your-service-name.service (for admin-created units)
or /usr/lib/systemd/system/your-service-name.service (for package-installed units).
Custom units like this one belong in /etc/systemd/system.
# Example command to edit the file
sudo vi /etc/systemd/system/elastic.service
Configuration file contents:
[Unit]
Description=Elasticsearch Service
# Ensure the network is up before starting
After=network.target

[Service]
# Uncomment to have systemd create the runtime directory:
# RuntimeDirectory=elasticsearch
User=hadoop
Group=hadoop
WorkingDirectory=/home/hadoop/elasticsearch
Environment=ES_HOME=/home/hadoop/elasticsearch
Environment=ES_PATH_CONF=/home/hadoop/elasticsearch/config
# Example heap size environment variable (adjust as needed)
Environment=ES_HEAP_SIZE=512M
# For more specific Java options:
# Environment=ES_JAVA_OPTS="-Xmx2g -Xms2g"
ExecStart=/home/hadoop/elasticsearch/bin/elasticsearch
# Restart the service if it crashes, waiting 10 seconds between attempts
Restart=always
RestartSec=10
# 'simple' assumes the main process does not fork
Type=simple

[Install]
# Standard target for the multi-user runlevel
WantedBy=multi-user.target
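A typical workflow after saving the unit file (assuming it was saved as /etc/systemd/system/elastic.service):
# Pick up the new or edited unit file
sudo systemctl daemon-reload
# Enable at boot and start immediately in one step
sudo systemctl enable --now elastic.service
# Verify it is running
sudo systemctl status elastic.service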
Network and Firewall Tools
UFW – Uncomplicated Firewall Rules
# Check UFW status (enabled/disabled) and list rules
sudo ufw status verbose
# Enable the firewall
sudo ufw enable
# Disable the firewall
sudo ufw disable
# Allow incoming traffic on a specific TCP port
sudo ufw allow 9200/tcp
sudo ufw allow 80/tcp # HTTP
sudo ufw allow 443/tcp # HTTPS
# Allow incoming traffic on a specific UDP port
sudo ufw allow 53/udp # DNS
# Deny traffic on a port
sudo ufw deny 3000
# Delete a rule (use 'status numbered' to get rule numbers)
sudo ufw status numbered
sudo ufw delete 1 # Deletes rule number 1
# Allow traffic from a specific IP address
sudo ufw allow from 192.168.1.100
# Allow traffic from a specific IP address to a specific port
sudo ufw allow from 192.168.1.100 to any port 22 proto tcp
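Rules can also target whole subnets, and existing rules can be deleted by restating them (the subnet below is illustrative):
# Allow a /24 subnet to reach Elasticsearch on port 9200
sudo ufw allow from 192.168.1.0/24 to any port 9200 proto tcp
# Delete a rule by restating it (alternative to deleting by number)
sudo ufw delete allow 9200/tcp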
curl and wget – Download Files and Test APIs
# Make a GET request to a URL (e.g., test a web server or API)
curl http://localhost:9200/
curl https://api.example.com/v1/status
# Download a file using wget
wget https://example.com/archive.zip
# Download a file using curl (specify output file with -o or -O)
curl -o downloaded_file.zip https://example.com/archive.zip # Save as specified name
curl -O https://example.com/archive.zip # Save using the remote filename
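curl can also send headers and request bodies, which is handy for testing JSON APIs (the index name and payload below are illustrative):
# POST a JSON document to Elasticsearch (-s silences the progress output)
curl -s -X POST -H "Content-Type: application/json" \
     -d '{"message": "hello"}' http://localhost:9200/my-index/_doc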
Network Inspection
netstat – View Network Connections, Listening Ports, etc. (Legacy)
# Show all listening TCP and UDP ports, associated programs, and PIDs
# -t: TCP, -u: UDP, -l: Listening, -n: Numeric addresses/ports, -p: Program/PID, -e: Extended info
sudo netstat -tulnpe
# Filter output (e.g., check if Elasticsearch is listening on port 9200)
sudo netstat -tulnpe | grep 9200
(Note: netstat is considered legacy; ss is preferred on modern systems.)
ss – Modern Socket Statistics Tool (netstat Replacement)
# Show all listening TCP and UDP sockets
# -t: TCP, -u: UDP, -l: Listening, -n: Numeric, -p: Processes
ss -tulnp
# Filter output (e.g., verify Kibana is running on port 5601)
ss -tulnp | grep 5601
# Show all established TCP connections
ss -tn
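Two more ss invocations that come up often:
# Summary statistics for all socket types
ss -s
# Established TCP connections only, using ss's state filter
ss -tn state established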
Archiving and Extraction
tar – Create and Extract Tape Archives (.tar, .tar.gz, .tar.bz2)
# Create a compressed archive (gzip) from a directory
# -c: Create, -z: gzip, -v: Verbose, -f: File
tar -czvf logs-backup.tar.gz /var/log/
# Create a compressed archive (bzip2) - often better compression
# -j: bzip2
tar -cjvf archive.tar.bz2 directory/
# Extract a gzip archive
# -x: Extract
tar -xzvf logs.tar.gz
# Extract a bzip2 archive
tar -xjvf archive.tar.bz2
# Extract to a specific target directory
tar -xzvf logs.tar.gz -C /path/to/extract/destination/
Example: Backup Nginx configuration and specific systemd service files:
tar -czvf config-backup-$(date +%Y%m%d).tar.gz /etc/nginx /etc/systemd/system/my-app.service /etc/systemd/system/elastic.service
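To inspect an archive before extracting it (using the gzip archive created above):
# List archive contents without extracting
# -t: List
tar -tzvf logs-backup.tar.gz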
unzip – Extract .zip Archives
# Extract files from a zip archive to the current directory
unzip data.zip
# Extract files to a specified target directory
# -d: Destination directory
unzip data.zip -d ~/datasets/
# List contents of a zip file without extracting
unzip -l data.zip
Example: After downloading a zipped application:
wget https://example.com/app-v1.2.zip
unzip app-v1.2.zip -d /opt/
Process and System Monitoring
# List currently running processes (various formats)
ps aux # BSD format, shows all users' processes
ps -ef # System V format, shows full command line
# Filter processes (e.g., find Java processes)
ps aux | grep java
# Kill a process using its Process ID (PID)
# Find the PID first using ps or pgrep
pgrep nginx # Get PID of nginx process
kill <PID> # Sends SIGTERM (graceful shutdown)
kill -9 <PID> # Sends SIGKILL (force kill - use with caution)
# Kill processes by name
pkill nginx # Sends SIGTERM to all processes named nginx
killall nginx # Like pkill, but matches the exact process name by default
# Display dynamic real-time view of system resource usage (CPU, Memory)
# Press 'q' to quit, 'h' for help
top
# An enhanced interactive process viewer (often needs installation: sudo apt install htop / sudo yum install htop)
htop
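ps can also sort non-interactively, which is useful in scripts or over slow connections:
# Top 10 processes by memory usage (11 lines = header + 10 processes)
ps aux --sort=-%mem | head -n 11
# Top 10 processes by CPU usage
ps aux --sort=-%cpu | head -n 11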
Background Jobs and Scheduling
# Run a command or script in the background
./long-running-script.sh &
# List background jobs started in the current shell
jobs
# Bring a background job to the foreground (e.g., job number 1)
fg %1
# Put the current foreground job into the background (Press Ctrl+Z, then type bg)
# Example:
# ./my-server
# (Press Ctrl+Z)
# bg
# Allow a background job to continue running even after logging out
disown %1 # Disown job number 1
disown # Disown the most recently backgrounded job
# Alternatively, use 'nohup' when starting the command:
nohup ./long-running-script.sh &
# View scheduled cron jobs for the current user
crontab -l
# Edit scheduled cron jobs for the current user (opens default editor)
crontab -e
# Schedule a command to run once at a later time using 'at'
# (May require installing 'at': sudo apt install at / sudo yum install at)
# Example: schedule a reboot in 2 minutes
# (run 'at' as root: the job executes without a terminal,
# so an interactive sudo password prompt inside it would fail)
echo "reboot" | sudo at now + 2 minutes
# List pending 'at' jobs
atq
# Remove an 'at' job (use the job number from atq)
atrm <job_number>
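A crontab entry has five time fields (minute, hour, day-of-month, month, day-of-week) followed by the command; the script path below is a placeholder:
# Run a backup script every day at 02:00, appending output to a log
0 2 * * * /home/hadoop/backup.sh >> /home/hadoop/backup.log 2>&1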
SSH and Remote Operations
Basic SSH Connection
# Connect to a remote server using a username and hostname/IP address
ssh username@remote_host
# Connect using a specific port (if not the default port 22)
ssh -p 2222 username@remote_host
# Connect using a specific private key file
ssh -i /path/to/private_key username@remote_host
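Key-based login avoids repeated password prompts, and ssh can also run a single remote command and exit:
# Install your public key on the remote server (prompts for the password once)
ssh-copy-id username@remote_host
# Run a command remotely without opening an interactive shell
ssh username@remote_host 'systemctl status elastic.service'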
Secure Copy (SCP) – Transfer Files/Directories over SSH
# Upload: Copy a local file to a remote server
scp /path/to/local/file.txt username@remote_host:/remote/path/
# Download: Copy a file from a remote server to local machine
scp username@remote_host:/remote/path/file.txt /local/path/
# Upload: Copy an entire local directory recursively to remote server
# -r: Recursive
scp -r /path/to/local/directory/ username@remote_host:/remote/path/
# Download: Copy an entire remote directory recursively to local machine
scp -r username@remote_host:/remote/path/directory/ /local/path/
# Specify port for SCP (use -P, uppercase P)
scp -P 2222 /local/file.txt username@remote_host:/remote/path/
# Specify identity file (private key) for SCP
scp -i /path/to/private_key /local/file.txt username@remote_host:/remote/path/
SCP Examples:
# Upload a configuration backup archive to the remote user's home directory
scp config-backup-20250504.tar.gz hadoop@192.168.1.50:~/backups/
# Download server log files from a remote machine to the current local directory (.)
scp hadoop@192.168.1.50:/var/log/nginx/access.log .
# Upload the entire 'my_project' directory to the remote home directory
scp -r my_project/ hadoop@192.168.1.50:~/
# Download the remote 'data' directory into a local 'data_backup' directory
scp -r hadoop@192.168.1.50:~/data/ ./data_backup/
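For large or repeated transfers, rsync over SSH is often preferable to scp because it only sends changed data (assumes rsync is installed on both machines; paths reuse the examples above):
# Mirror a local directory to the remote host
# -a: preserve permissions/timestamps, -v: verbose, -z: compress in transit
rsync -avz -e ssh my_project/ hadoop@192.168.1.50:~/my_project/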
SSH Port Forwarding (Tunneling)
Local Port Forwarding
Purpose: Access a service running on the remote machine (or its network) as if it were running on your local machine.
# Forward connections from local port <local_port> to <remote_service_host>:<remote_service_port>
# via the SSH connection to <username>@<remote_host>
ssh -L <local_port>:<remote_service_host>:<remote_service_port> username@remote_host
# Example: Access remote Elasticsearch (running on localhost:9200 on the remote server)
# via your local machine's localhost:8080
ssh -L 8080:localhost:9200 user@remote-server.com
# Now, access http://localhost:8080 in your local browser/curl to reach remote Elasticsearch
Explanation: Connections made to localhost:8080 on your machine are securely tunneled through the SSH connection to remote-server.com, which then connects to localhost:9200 from its own perspective.
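A quick way to verify the tunnel (-N skips the remote shell, -f backgrounds the connection):
ssh -N -f -L 8080:localhost:9200 user@remote-server.com
curl http://localhost:8080/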
Remote Port Forwarding
Purpose: Expose a service running on your local machine to the remote machine (or its network).
# Forward connections arriving at <remote_host>'s <remote_port>
# to your local machine's <local_service_host>:<local_service_port>
ssh -R <remote_port>:<local_service_host>:<local_service_port> username@remote_host
# Example: Make your local development web server (running on localhost:3000)
# accessible on the remote server via port 8080
ssh -R 8080:localhost:3000 user@remote-server.com
# Now, someone on remote-server.com (or its network, depending on the GatewayPorts setting)
# can potentially access http://remote-server.com:8080 to reach your local localhost:3000
Use Case: Useful for temporarily exposing a local development service to a remote collaborator or server for testing. Be mindful of security implications.
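As with local forwarding, -N keeps the session tunnel-only; note that by default sshd binds the forwarded remote port to the loopback interface unless GatewayPorts is enabled in sshd_config:
ssh -N -R 8080:localhost:3000 user@remote-server.com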