The complete guide to CSV imports in Laravel
============================================

 Manch Minasyan ·  April 12, 2026  · 13 min read

 Every Laravel application eventually needs a CSV import. A client hands you a spreadsheet of 10,000 contacts. A migration from a legacy system arrives as a zip file of exports. An operations team wants to bulk-update inventory from their procurement tool. The need is universal, but the right approach depends entirely on your constraints.

This guide covers four approaches to building a Laravel CSV import, from the simplest 20-line script to purpose-built import wizards. Each comes with real code, honest tradeoffs, and guidance on when it fits.

[\#](#the-naive-approach-fopen--fgetcsv "Permalink")The naive approach: fopen + fgetcsv
---------------------------------------------------------------------------------------

When the requirement first lands, most developers reach for PHP's built-in [`fgetcsv`](https://www.php.net/manual/en/function.fgetcsv.php). It works, and it ships fast:

```
class CsvImportController extends Controller
{
    public function store(Request $request)
    {
        $request->validate(['csv' => 'required|file|mimes:csv,txt']);

        $handle = fopen($request->file('csv')->getPathname(), 'r');
        $headers = fgetcsv($handle); // first row = column names

        while (($row = fgetcsv($handle)) !== false) {
            $data = array_combine($headers, $row);

            Contact::create([
                'first_name' => $data['first_name'],
                'last_name'  => $data['last_name'],
                'email'      => $data['email'],
            ]);
        }

        fclose($handle);

        return back()->with('success', 'Import complete.');
    }
}

```

Seventeen lines of actual logic. For a one-off script importing 500 rows on a Tuesday afternoon, this is fine. Ship it.

But here is what breaks when the file gets real:

- **Performance and memory**: `fgetcsv` reads line by line, but `Contact::create()` fires one INSERT per row inside the web request. At 50,000 rows you are fighting PHP's execution time limit, and anything that accumulates per row (model events, query logging) pushes memory up with it.
- **Encoding**: The client exported from Excel on Windows. The file is Windows-1252, not UTF-8. The import silently stores mangled names.
- **Column mismatches**: The CSV says "First Name" with a space while your code expects `first_name` with an underscore, so every `$data['first_name']` lookup triggers an undefined array key error instead of importing anything.
- **No validation**: Row 3,847 has "N/A" in the email column. You get a database constraint violation and the entire import dies.
- **No feedback**: The user clicks "Import" and stares at a loading spinner for 90 seconds. If it times out, they have no idea how many rows made it.
- **No relationships**: The CSV has a "Company" column. You need to find-or-create the company and set the foreign key. That logic does not fit neatly into this loop.

Every production CSV import eventually needs to solve all six of these problems. The question is whether you build those solutions yourself or reach for a package that has already solved them.
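The encoding problem, at least, is cheap to guard against before you ever parse a row. A minimal sketch; the "assume Windows-1252 when the bytes are not valid UTF-8" heuristic is an assumption that fits Excel-on-Windows exports, not a universal fix:

```php
function normalizeToUtf8(string $contents): string
{
    // Strip a UTF-8 BOM if Excel left one behind.
    if (str_starts_with($contents, "\xEF\xBB\xBF")) {
        $contents = substr($contents, 3);
    }

    // If the bytes are not valid UTF-8, assume a Windows-1252 export.
    if (! mb_check_encoding($contents, 'UTF-8')) {
        $contents = mb_convert_encoding($contents, 'UTF-8', 'Windows-1252');
    }

    return $contents;
}
```

Run the uploaded file's contents through this once, write the result to a temp file, and every later stage can assume clean UTF-8.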

[\#](#approach-1-diy-laravel-csv-import-with-building-blocks "Permalink")Approach 1: DIY Laravel CSV import with building blocks
--------------------------------------------------------------------------------------------------------------------------------

If you want full control, Laravel gives you the pieces to build a solid import pipeline without any third-party packages. The tradeoff is time. You will need to assemble file handling, CSV parsing, validation, queue processing, and error reporting from scratch. Each piece is straightforward on its own. The complexity comes from wiring them together correctly and handling the edge cases that surface once real users upload real files.

### [\#](#file-upload-and-validation "Permalink")File upload and validation

Start with a form request and proper MIME validation:

```
class ImportContactsRequest extends FormRequest
{
    public function rules(): array
    {
        return [
            'csv' => ['required', 'file', 'mimes:csv,txt', 'max:10240'],
        ];
    }
}

```

### [\#](#parsing-with-leaguecsv "Permalink")Parsing with League\\Csv

For reliable parsing that handles encoding, BOM markers, and edge cases in quoted fields, [League\\Csv](https://csv.thephpleague.com/) is the standard:

```
use League\Csv\Reader;
use League\Csv\Statement;

$csv = Reader::createFromPath($path, 'r');
$csv->setHeaderOffset(0);

$records = Statement::create()->process($csv);

foreach ($records as $record) {
    // $record is an associative array keyed by headers
    ProcessContactRow::dispatch($record);
}

```

### [\#](#background-processing "Permalink")Background processing

Any file over a few hundred rows should be processed in a queue job. Dispatch a job per row or batch them:

```
use App\Models\Contact;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Validator;

class ProcessContactRow implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(
        private readonly array $data,
    ) {}

    public function handle(): void
    {
        // Throws a ValidationException (failing the job) when the row is invalid.
        $validated = Validator::make($this->data, [
            'email' => ['required', 'email'],
            'first_name' => ['required', 'string'],
        ])->validate();

        Contact::updateOrCreate(
            ['email' => $validated['email']],
            $validated,
        );
    }
}

```
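One job per row floods the queue at scale. Laravel's job batching gives you chunked dispatch plus a completion hook; a sketch, where `ProcessContactChunk` is a hypothetical job that accepts an array of rows and uses the `Batchable` trait:

```php
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Illuminate\Support\LazyCollection;

// $records is the iterable of parsed rows from League\Csv.
$jobs = LazyCollection::make($records)
    ->chunk(500)
    ->map(fn ($chunk) => new ProcessContactChunk($chunk->values()->all()))
    ->all();

Bus::batch($jobs)
    ->then(function (Batch $batch) {
        // All chunks processed; notify the user here.
    })
    ->dispatch();
```

The batch also exposes `processedJobs()` and `totalJobs`, which is the raw material for a progress indicator later.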

This works. But count what you have written so far: a form request, a controller, a parser, a queue job, and validation rules. You are at roughly 200 lines of code and you still have not built:

- A column mapping interface (what if the CSV says "Email Address" instead of "email"?)
- Error reporting that tells the user which rows failed and why
- The ability for users to correct invalid values and retry
- Relationship resolution (linking a "Company" column to a `companies` table)
- Progress feedback during processing
- Duplicate detection and merge logic

Each of those is another 50-200 lines. For a queue-powered import pipeline that handles 100k+ rows, the DIY approach works but demands significant upfront investment.

[\#](#approach-2-laravel-excel-maatwebsite "Permalink")Approach 2: Laravel Excel (Maatwebsite)
----------------------------------------------------------------------------------------------

[Laravel Excel](https://docs.laravel-excel.com/3.1/imports/) is the most popular import/export package in the ecosystem, with over 145 million downloads. For good reason: it reduces the parsing and model-creation boilerplate to almost nothing.

Install and generate an import class:

```
composer require maatwebsite/excel
php artisan make:import ContactImport --model=Contact

```

A basic import with heading row support:

```
use App\Models\Contact;
use Maatwebsite\Excel\Concerns\ToModel;
use Maatwebsite\Excel\Concerns\WithHeadingRow;
use Maatwebsite\Excel\Concerns\WithValidation;

class ContactImport implements ToModel, WithHeadingRow, WithValidation
{
    public function model(array $row): Contact
    {
        return new Contact([
            'first_name' => $row['first_name'],
            'last_name'  => $row['last_name'],
            'email'      => $row['email'],
            'phone'      => $row['phone'] ?? null,
        ]);
    }

    public function rules(): array
    {
        return [
            'email'      => ['required', 'email'],
            'first_name' => ['required', 'string'],
        ];
    }
}

```

For large files, add chunked reading and queue processing:

```
use Maatwebsite\Excel\Concerns\WithChunkReading;
use Illuminate\Contracts\Queue\ShouldQueue;

class ContactImport implements ToModel, WithHeadingRow, WithChunkReading, ShouldQueue
{
    public function chunkSize(): int
    {
        return 1000;
    }

    // ...
}

```

Then in your controller:

```
Excel::import(new ContactImport, $request->file('csv'));

```

### [\#](#where-laravel-excel-fits "Permalink")Where Laravel Excel fits

This is hard to beat for developer experience on straightforward imports. Three concerns, one class, and your CSV rows become Eloquent models. The `WithChunkReading` concern prevents memory exhaustion on large files by reading the spreadsheet in segments rather than loading everything at once. Combined with `ShouldQueue`, you get background processing without writing any job classes.

Laravel Excel is excellent when:

- You control the file format (internal tools, system-to-system transfers)
- Column names are predictable and consistent
- You do not need a user-facing UI beyond a file upload button
- The import is fire-and-forget with failures logged to a table

### [\#](#where-it-falls-short "Permalink")Where it falls short

Laravel Excel is a parser with Eloquent bindings. It deliberately stays out of the UI business. That means:

- **No column mapping interface**: If the CSV says "E-mail" and your field is `email`, the import silently skips it. Users cannot fix this themselves.
- **No error correction UI**: Failed rows are collected via `WithValidation` and `SkipsOnFailure`, but the user gets a list of errors. They cannot edit the offending cell and retry.
- **Limited relationship handling**: Linking a "Company Name" column to a `companies` foreign key requires custom code in your `model()` method for every relationship.
- **No multi-step wizard**: The UX is upload-and-pray. There is no review step where users verify the mapping looks right before committing.
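Collecting failures instead of aborting mid-file is at least straightforward. A sketch using the package's `SkipsOnFailure` concern, layered onto the earlier `ContactImport`:

```php
use Maatwebsite\Excel\Concerns\SkipsOnFailure;
use Maatwebsite\Excel\Concerns\ToModel;
use Maatwebsite\Excel\Concerns\WithHeadingRow;
use Maatwebsite\Excel\Concerns\WithValidation;
use Maatwebsite\Excel\Validators\Failure;

class ContactImport implements ToModel, WithHeadingRow, WithValidation, SkipsOnFailure
{
    /** @var Failure[] */
    public array $failures = [];

    public function onFailure(Failure ...$failures): void
    {
        // Each Failure carries the row number, attribute, and error messages.
        $this->failures = array_merge($this->failures, $failures);
    }

    // model() and rules() as in the earlier example...
}
```

This gets you a list of what went wrong. What it does not get you is a way for the user to act on that list without editing the source file.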

For a deeper comparison, see the Laravel Excel vs Tapix breakdown.

[\#](#approach-3-filaments-built-in-import-action "Permalink")Approach 3: Filament's built-in Import Action
-----------------------------------------------------------------------------------------------------------

If you are already using [Filament](https://filamentphp.com/docs/5.x/actions/import), its built-in Import Action gives you a column mapping modal and queue processing out of the box.

Add an import action to a list page:

```
use Filament\Actions\ImportAction;

class ListContacts extends ListRecords
{
    protected function getHeaderActions(): array
    {
        return [
            ImportAction::make()
                ->importer(ContactImporter::class),
        ];
    }
}

```

Define the importer:

```
use App\Models\Contact;
use Filament\Actions\Imports\ImportColumn;
use Filament\Actions\Imports\Importer;

class ContactImporter extends Importer
{
    protected static ?string $model = Contact::class;

    public static function getColumns(): array
    {
        return [
            ImportColumn::make('first_name')
                ->requiredMapping()
                ->rules(['required']),

            ImportColumn::make('last_name')
                ->requiredMapping()
                ->rules(['required']),

            ImportColumn::make('email')
                ->rules(['required', 'email']),
        ];
    }

    public function resolveRecord(): ?Contact
    {
        return Contact::firstOrNew([
            'email' => $this->data['email'],
        ]);
    }
}

```

### [\#](#what-filament-import-action-does-well "Permalink")What Filament Import Action does well

- **Column mapping modal**: Users see their CSV headers alongside your field definitions and can manually adjust the mapping. This alone prevents the most common import failure.
- **Quick setup**: If you are already in Filament, adding an import button is a 10-minute task.
- **Queue integration**: Files are chunked and processed via queue jobs. You get progress notifications.

### [\#](#where-it-reaches-its-limits "Permalink")Where it reaches its limits

Filament's import was designed as a convenience feature, not a dedicated import system. The boundaries show up in production:

- **UTF-8 only**: Files with other encodings need pre-processing. Excel exports from Windows frequently use Windows-1252.
- **Error handling is after-the-fact**: Failed rows are exported to a downloadable CSV. Users cannot correct values inline and re-process. They have to fix the source file and re-upload.
- **Limited relationship support**: You can implement relationship resolution in `resolveRecord()`, but there is no built-in UI for mapping a column to a related entity with find-or-create behavior.
- **No multi-step review**: The flow is upload, confirm mapping, import. There is no validation preview step where users see errors and fix them before committing.
- **Single mapping modal**: Complex imports with 20+ columns and relationship fields outgrow the single-modal interface.
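You can wire find-or-create behavior yourself with a column callback. A sketch using `fillRecordUsing`; note this is one-off custom code you repeat per relationship, which is exactly the maintenance cost described above:

```php
use App\Models\Company;
use App\Models\Contact;
use Filament\Actions\Imports\ImportColumn;

ImportColumn::make('company')
    ->fillRecordUsing(function (Contact $record, ?string $state): void {
        if (filled($state)) {
            // Find-or-create the company by name, then set the FK manually.
            $record->company_id = Company::firstOrCreate([
                'name' => trim($state),
            ])->id;
        }
    }),
```

There is no UI asking the user whether "Acme Corp." should match the existing "Acme Corp" record; the string either matches exactly or a near-duplicate gets created.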

For a detailed breakdown of when Filament's built-in import is sufficient and when you need more, see the Filament Import Action comparison.

[\#](#what-production-laravel-csv-imports-actually-need "Permalink")What production Laravel CSV imports actually need
---------------------------------------------------------------------------------------------------------------------

Build import features across enough Laravel applications and the requirements converge on the same set. Here is the gap between what the approaches above provide and what real users demand.

### [\#](#column-mapping-with-smart-defaults "Permalink")Column mapping with smart defaults

Users should not have to manually map every column. A good import system auto-detects that "E-mail Address" maps to your `email` field, that "First Name" and "fname" both map to `first_name`, and that "Phone Number" maps to `phone`. The user reviews the suggested mapping and corrects only what the system got wrong.

This means your fields need aliases. In Tapix, an `ImportField` definition carries a `guess()` list:

```
ImportField::make('first_name')
    ->required()
    ->guess(['first name', 'first', 'given name', 'fname']),

ImportField::make('email')
    ->type(FieldType::Email)
    ->guess(['email', 'email address', 'e-mail']),

```

The `ColumnMapper` normalizes both the CSV header and the guess list (lowercased, spaces/dashes/underscores treated as equivalent), then matches automatically. Users only intervene when the auto-mapping misses.
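The normalization itself is simple string work. A minimal sketch of the matching logic; the real `ColumnMapper` internals are not shown in this post, so treat this as an illustration of the idea rather than Tapix's actual implementation:

```php
function normalizeHeader(string $header): string
{
    // Lowercase, then treat spaces, dashes, and underscores as equivalent.
    return str_replace([' ', '-', '_'], '', mb_strtolower(trim($header)));
}

/**
 * @param  string[]  $csvHeaders  headers found in the uploaded file
 * @param  array<string, string[]>  $guesses  field name => guess aliases
 * @return array<string, string>  field name => matched CSV header
 */
function autoMapColumns(array $csvHeaders, array $guesses): array
{
    $mapping = [];

    foreach ($guesses as $field => $aliases) {
        // The field's own name is always an implicit alias.
        $candidates = array_map('normalizeHeader', [$field, ...$aliases]);

        foreach ($csvHeaders as $header) {
            if (in_array(normalizeHeader($header), $candidates, true)) {
                $mapping[$field] = $header;
                break;
            }
        }
    }

    return $mapping;
}
```

With this, "E-mail", "e mail", and "Email" all resolve to the same field without the user touching anything.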

For more on the UX patterns behind this, see the CSV column mapping UX patterns post.

### [\#](#inline-validation-with-error-correction "Permalink")Inline validation with error correction

Validation that dumps 200 error messages after import is useless. Users need to see validation errors on a per-row, per-cell basis before any data touches the database. They need to click on "N/A" in the email column, replace it with a real address, and mark the row as fixed.

This requires storing raw CSV data, running validation as a separate step, and persisting both the errors and the corrections. The review step becomes the critical UI where data quality problems get resolved.
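The validation pass itself is the easy half. A sketch of a dry run that produces a per-row, per-cell error map without touching the database; the review UI built on top of this map is where the real work lives:

```php
use Illuminate\Support\Facades\Validator;

/**
 * @param  array<int, array<string, mixed>>  $rows  raw mapped rows
 * @param  array<string, array>  $rules  Laravel validation rules
 * @return array<int, array<string, string[]>>  row index => field => messages
 */
function validateRows(array $rows, array $rules): array
{
    $errors = [];

    foreach ($rows as $index => $row) {
        $validator = Validator::make($row, $rules);

        if ($validator->fails()) {
            // e.g. [3847 => ['email' => ['The email field must be a valid email address.']]]
            $errors[$index] = $validator->errors()->toArray();
        }
    }

    return $errors;
}
```

Persist this map alongside the raw rows, overlay user corrections on top, and re-validate only the rows that changed.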

The details of building this correctly are covered in the Handling CSV validation errors post.

### [\#](#relationship-linking "Permalink")Relationship linking

The hardest part of most imports is not the flat fields. It is the "Company" column that needs to resolve to a `company_id` foreign key. The system needs to:

1. Search existing companies by name
2. Let the user confirm or reject matches
3. Optionally create new companies for unmatched values
4. Handle the find-or-create decision per relationship, not globally

This is where the EntityLink system and `MatchBehavior` enum (MatchOnly, MatchOrCreate, Create) become essential. A field definition carries its relationship configuration:

```
ImportField::make('company')
    ->label('Company')
    ->guess(['company', 'company name', 'organization', 'org'])
    ->relationship(
        name: 'company',
        model: Company::class,
        matchBy: ['name'],
        behavior: MatchBehavior::MatchOrCreate,
    ),

```

The system resolves the relationship during execution, deduplicating across rows so you do not end up with 500 duplicate "Acme Corp" records from a file that mentions the same company on every row.
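The deduplication is a small memoization layer over the find-or-create query. A sketch, assuming an Eloquent `Company` model; the cache lives for the duration of a single import run:

```php
use App\Models\Company;

class CompanyResolver
{
    /** @var array<string, int> normalized name => company id */
    private array $cache = [];

    public function resolve(string $name): int
    {
        $key = mb_strtolower(trim($name));

        // Hit the database once per distinct company, not once per row.
        return $this->cache[$key] ??= Company::firstOrCreate([
            'name' => trim($name),
        ])->id;
    }
}
```

A file with "Acme Corp" on 500 rows then produces one query and one record, not 500 of each.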

For a full treatment of relationship resolution patterns, see the Importing relational data from CSV in Laravel post.

### [\#](#background-processing-with-live-progress "Permalink")Background processing with live progress

Any import over a few hundred rows must run in the background. But "fire and forget" is not acceptable UX. Users need:

- A progress indicator showing rows processed out of total
- Live status updates (processing, completed, failed)
- The ability to navigate away and come back without losing the import

This means the import state lives in the database, not in memory. The queue job processes rows in chunks (500 by default), updates progress counters after each chunk, and broadcasts events that the frontend can consume.

### [\#](#multi-tenancy "Permalink")Multi-tenancy

In multi-tenant applications, every query in the import pipeline must be scoped to the current tenant. That includes the initial duplicate check, relationship resolution, and the final insert. The tenant context needs to flow from the HTTP request through to the queue worker, which runs in a completely different process.
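The usual pattern is to capture the tenant id at dispatch time, while the HTTP request is still around, and scope every query explicitly in the worker. A sketch; the `tenant_id` column on `contacts` is an assumption about your schema:

```php
use App\Models\Contact;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ImportChunk implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(
        private readonly int $tenantId, // captured at dispatch time
        private readonly array $rows,
    ) {}

    public function handle(): void
    {
        foreach ($this->rows as $row) {
            // The worker has no request context, so the scope is explicit.
            Contact::updateOrCreate(
                ['tenant_id' => $this->tenantId, 'email' => $row['email']],
                $row,
            );
        }
    }
}
```

If you use a tenancy package that re-initializes tenant context inside jobs, the same principle applies: the tenant identity must be serialized with the job, never inferred in the worker.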

[\#](#choosing-your-approach "Permalink")Choosing your approach
---------------------------------------------------------------

| Requirement | `fgetcsv` | DIY | Laravel Excel | Filament Import |
| --- | --- | --- | --- | --- |
| Quick one-off import | Yes | Overkill | Yes | Yes |
| Large files (50k+ rows) | No | With work | Yes | Yes |
| Column mapping UI | No | Build it | No | Basic modal |
| Inline error correction | No | Build it | No | No |
| Relationship linking | No | Build it | Manual | Manual |
| Queue processing | No | Build it | Yes | Yes |
| Multi-step wizard | No | Build it | No | No |

**Use `fgetcsv`** for throwaway scripts and internal tools where you control the file format. It is the right tool when you know the exact column structure, the file is small, and nobody except you will ever trigger the import.

**Build it yourself** when you need total control and have the engineering budget for 500+ lines of import infrastructure. This makes sense for highly custom business logic that no package will cover, or when your import requirements are genuinely unique. Just be realistic about the maintenance cost -- every encoding edge case, every new relationship type, and every UX improvement costs engineering hours over the lifetime of the feature.

**Use Laravel Excel** when you need reliable parsing with Eloquent integration and do not need a user-facing mapping or correction UI. It is particularly strong for scheduled or programmatic imports where a developer controls the file format and there is no end user in the loop.

**Use Filament's Import Action** when you are already in Filament and a basic column mapping modal covers your needs. It gets you 80% of the way with 10% of the effort. The main constraint is that error handling happens after the import finishes, so it works best when your data quality is already reasonably good.

[\#](#when-you-need-all-of-it "Permalink")When you need all of it
-----------------------------------------------------------------

If your application requires smart column mapping, inline validation and correction, relationship linking, queue-powered processing with live progress, and multi-tenancy support, you are looking at building and maintaining a significant piece of infrastructure.

This is the problem Tapix solves. It is a 4-step import wizard for Laravel -- Upload, Map, Review, Execute -- that handles all of the above out of the box. It drops into any Filament panel as a plugin or runs as standalone Livewire components.

The importer definition is a single class with typed fields, guess aliases, validation rules, and relationship configuration. The wizard handles the rest: auto-mapping columns, showing validation errors inline, letting users fix values before import, resolving relationships with configurable match behavior, and processing rows in chunked queue jobs with live progress.

The signal that tells you Tapix is the right fit: your users are uploading files, not your developers. The moment a real end user is in the loop -- with their own spreadsheets, their own column names, their own data quality -- the DIY path gets expensive fast.

Take a look at [tapix.dev](/) or check out the [pricing](/#pricing). Early access pricing is available now -- prices go up at general availability.


