Working with text line-by-line is one of those tasks that comes up constantly — cleaning exported data, deduplicating a list of domains, sorting log entries, reversing the order of a stack trace. On the command line, sort, uniq, and awk handle this well. In the browser, without a terminal session available, an online line tool is the fastest path.
Sort, deduplicate, and clean lines online →
Common Line Operations
Sort Lines Alphabetically
Sorting a list of items is one of the most common text manipulation tasks. Use cases:
- Normalize a list of imports or dependencies before code review
- Sort a `.gitignore` or `hosts` file for readability
- Alphabetize a word list or glossary
- Order environment variable names in a `.env` file
The equivalent shell command:
sort input.txt
For case-insensitive sort:
sort -f input.txt
For reverse order:
sort -r input.txt
Remove Duplicate Lines
Duplicate lines accumulate naturally: merging two lists, appending to a log, exporting from multiple sources. Removing duplicates is typically the first step in cleaning the data.
Shell equivalent (preserves order with awk):
# Remove adjacent duplicates only (requires sorted input)
sort input.txt | uniq
# Remove all duplicates, preserve first occurrence (order-preserving)
awk '!seen[$0]++' input.txt
The awk version is more useful in practice because it preserves the original order of first occurrence without requiring a sort step.
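The difference between the two approaches is easy to see in Python, a rough sketch of what each command does (the list values here are just illustrative):

```python
lines = ["beta", "alpha", "beta", "gamma"]

# sort | uniq equivalent: duplicates removed, but output is sorted
# and the original order is lost
sorted_unique = sorted(set(lines))

# awk '!seen[$0]++' equivalent: keep the first occurrence of each
# line, in the order it first appeared
seen = set()
order_preserving = [l for l in lines if not (l in seen or seen.add(l))]

print(sorted_unique)     # ['alpha', 'beta', 'gamma']
print(order_preserving)  # ['beta', 'alpha', 'gamma']
```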
Reverse Line Order
Reversing line order is useful when you want the newest entries of a chronological log at the top, or when you need to read a stack trace from the bottom up.
# macOS
tail -r input.txt
# Linux
tac input.txt
Remove Blank Lines
Blank lines often appear when extracting text from PDFs, HTML, or CSVs. Stripping them out gives a cleaner working set.
grep -v '^$' input.txt
Or, to also drop lines that contain only whitespace, with sed:
sed '/^[[:space:]]*$/d' input.txt
Trim Whitespace
Leading and trailing whitespace in a list causes hidden problems — two entries that look identical may not match in a join, deduplication, or lookup.
sed 's/^[[:space:]]*//;s/[[:space:]]*$//' input.txt
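A quick Python illustration of the hidden-padding problem (the hostnames are made up for the example):

```python
raw = ["api.example.com", " api.example.com ", "db.example.com"]

# A naive dedup treats the padded entry as distinct
assert len(set(raw)) == 3

# Strip first, then dedupe (preserving order) to catch it
cleaned = list(dict.fromkeys(s.strip() for s in raw))
assert cleaned == ["api.example.com", "db.example.com"]
```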
Real Developer Use Cases
Deduplicating a dependency list
After merging requirements from multiple contributors, duplicates are common:
Before:
requests==2.31.0
numpy==1.26.0
requests==2.31.0
pandas==2.1.0
numpy==1.26.0
After deduplication:
requests==2.31.0
numpy==1.26.0
pandas==2.1.0
Sorting hosts file entries
A hosts file or DNS blocklist with thousands of entries is easier to maintain sorted:
0.0.0.0 ads.example.com
0.0.0.0 analytics.example.com
0.0.0.0 tracking.example.com
Reviewing sorted entries makes it easier to spot duplicates and find specific domains.
Cleaning log exports
When you export logs from Datadog, Grafana Loki, or Splunk and paste them into a text editor, you often get duplicate lines (from overlapping time windows) or extra blank lines. Running deduplicate + remove-blank-lines gives you a clean working set.
Normalizing API test data
A list of test IDs or test endpoints accumulated over time often has duplicates and inconsistent ordering. Sorting and deduplicating it before check-in keeps the list consistent across team members.
Processing clipboard data
You’re copying a column from a spreadsheet, a list of IDs from a database query result, or a block of text from a PDF. The raw paste often needs:
- Trim whitespace (column values may have padding)
- Remove blank lines (empty rows in the selection)
- Deduplicate (pivot table may repeat values)
- Sort (for readability or comparison)
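The four steps above chain naturally into one pipeline. A minimal Python sketch, using made-up IDs as stand-in clipboard data:

```python
raw = "  B-102 \n\nA-001\nB-102\n  A-001\n"

lines = raw.splitlines()
# 1. Trim whitespace from each line
lines = [l.strip() for l in lines]
# 2. Remove blank lines
lines = [l for l in lines if l]
# 3. Deduplicate, preserving first occurrence
lines = list(dict.fromkeys(lines))
# 4. Sort for readability
lines = sorted(lines)

print(lines)  # ['A-001', 'B-102']
```

Order matters here: trimming before deduplicating catches entries that differ only by padding.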
Command Line Reference
For those who prefer the terminal, here’s a reference for line operations:
| Operation | Command |
|---|---|
| Sort alphabetically | sort file.txt |
| Sort case-insensitive | sort -f file.txt |
| Sort reverse | sort -r file.txt |
| Sort numerically | sort -n file.txt |
| Remove adjacent duplicates | sort file.txt \| uniq |
| Remove all duplicates (preserve order) | awk '!seen[$0]++' file.txt |
| Count duplicates | sort file.txt \| uniq -c \| sort -rn |
| Reverse line order | tac file.txt (Linux) / tail -r file.txt (macOS) |
| Remove blank lines | grep -v '^$' file.txt |
| Trim whitespace | sed 's/^[[:space:]]*//;s/[[:space:]]*$//' file.txt |
| Count lines | wc -l file.txt |
When to Use a Browser Tool vs the Terminal
The terminal is more powerful for large files and automation. A browser tool wins when:
- You’re on a machine without a terminal (Windows without WSL, remote desktop)
- The data came from a browser context (copied from a web page, form output, API response)
- You want a quick one-off operation without opening a terminal window
- You’re sharing the workflow with a non-technical teammate who needs to do the same thing
For repeated operations on the same data pipeline, script it. For one-time cleanup, paste it into a browser tool and move on.
Process Lines in Your Code
For programmatic line processing in common languages:
JavaScript
const text = `apple\nbanana\napple\ncherry\nbanana`;
const lines = text.split('\n');
const sorted = [...lines].sort();
const unique = [...new Set(lines)];
const reversed = [...lines].reverse();
const noBlank = lines.filter(line => line.trim() !== '');
Python
text = "apple\nbanana\napple\ncherry\nbanana"
lines = text.splitlines()
sorted_lines = sorted(lines)
unique_lines = list(dict.fromkeys(lines)) # preserves order
reversed_lines = list(reversed(lines))
no_blank = [l for l in lines if l.strip()]
Go
// assumes: import ("sort"; "strings") and text declared earlier
lines := strings.Split(text, "\n")
// Sort in place
sort.Strings(lines)
// Deduplicate, preserving first-occurrence order
seen := make(map[string]bool)
unique := []string{}
for _, line := range lines {
    if !seen[line] {
        seen[line] = true
        unique = append(unique, line)
    }
}
Clean Lines Without a Terminal
For quick text line operations in the browser — no command line required:
Open Line Tools → — paste your text, choose sort, deduplicate, reverse, or trim, and copy the result.