Packer, Ubuntu Noble, and VirtualBox

Adventures with automating VM builds

I have been using Packer for quite a while, but all of my interactions have used JSON instead of HCL. I wanted to set up a new build using HCL, VirtualBox, and Ubuntu 24.04, so this is my attempt at documenting how to use HCL with VirtualBox to build a custom image based on the latest Ubuntu LTS release (with cloud-init).

In my research I found some decent guides that did most of what I wanted. I'll use these as a reference for building my specific use case.

The official docs will also come in handy and can be found here: https://developer.hashicorp.com/packer/integrations/hashicorp/virtualbox.

# virtualbox.pkr.hcl
packer {
  required_version = ">= 1.7.0"
  required_plugins {
    virtualbox = {
      version = "~> 1"
      source  = "github.com/hashicorp/virtualbox"
    }
    ansible = {
      version = ">= 1.1.1"
      source  = "github.com/hashicorp/ansible"
    }
  }
}

source "virtualbox-iso" "packer-vm-ubuntu" {
  guest_os_type = "Ubuntu_64"
  iso_url = "https://releases.ubuntu.com/noble/ubuntu-24.04-live-server-amd64.iso"
  iso_checksum = "sha256:8762f7e74e4d64d72fceb5f70682e6b069932deedb4949c6975d0f0fe0a91be3"
  http_directory = "./http/24.04/"
  ssh_username = "packer"
  ssh_password = "packer"
  ssh_timeout = "10m"
  shutdown_command = "echo 'packer' | sudo -S shutdown -P now"
  headless = false
  firmware = "efi"
  boot_command = ["e<wait><down><down><down><end> autoinstall 'ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/'<F10>"]
  boot_wait    = "5s"
  vboxmanage = [
    ["modifyvm", "{{.Name}}", "--memory", "4096"],
    ["modifyvm", "{{.Name}}", "--cpus", "4"],
    ["modifyvm", "{{.Name}}", "--nat-localhostreachable1", "on"]
  ]
}

# build {
#   sources = ["source.virtualbox-iso.packer-vm-ubuntu"]
#   
#   provisioner "ansible" {
#     playbook_file = "../../ansible/ubuntu-desktop.yaml"
#   }
# }

I'd like to point out a few things. First, the http_directory property specifies a directory that Packer exposes over HTTP. Its contents will be consumed by cloud-init, and we'll use that to create our initial user.

In our http_directory we'll need to create two files.

  • meta-data
  • user-data

We'll leave meta-data empty; if it is not present, cloud-init will refuse to consume our user-data file.
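
A quick sketch of laying that out (paths assumed to match the http_directory value above):

# Directory Packer serves over HTTP
mkdir -p http/24.04
touch http/24.04/meta-data   # must exist, but stays empty
touch http/24.04/user-data   # holds the autoinstall config

The user-data file contains the autoinstall cloud-config below.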

#cloud-config
autoinstall:
  version: 1
  locale: en_US
  keyboard:
    layout: us
  ssh:
    install-server: true
    allow-pw: true
  packages:
    - zsh
  updates: all
  late-commands:
    - |
      if [ -d /sys/firmware/efi ]; then
        apt-get install -y efibootmgr
        efibootmgr -o $(efibootmgr | perl -n -e '/Boot(.+)\* Ubuntu/ && print $1')
      fi      
  user-data:
    preserve_hostname: false
    hostname: carbon
    package_upgrade: true
    timezone: UTC
    users:
      - name: packer
        # passwd must be a password hash, you can generate it with `openssl passwd -6 replacewithyourpassword`
        passwd: $6$ZfIbBMQd5rmGTGPk$AWrvIL1v4Xq6jsSR72KsSONa2VpSnr8SZHPDF2l6pNNcQ3HKjqWF2JEBYepl4LnnmzKiKFEcRuf7lfyOMooq50
        groups: [adm, cdrom, dip, plugdev, lxd, sudo]
        lock-passwd: false
        sudo: ALL=(ALL) NOPASSWD:ALL
        shell: /bin/zsh
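
As the inline comment notes, the passwd value must be a password hash rather than plain text; generating one looks like this:

# Prints a $6$... SHA-512 crypt hash for the passwd field (the example password is a placeholder)
openssl passwd -6 'replacewithyourpassword'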

Now we should be able to build our VM using VirtualBox.

packer build virtualbox.pkr.hcl
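
If the plugins declared in required_plugins aren't installed yet, running packer init first should fetch them:

# Run from the directory containing virtualbox.pkr.hcl; downloads the declared plugins
packer init .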

If you have an Ansible playbook, you can reference it in the build section (commented out in the template above).

SAINTCON Training

I've been working on phishing training for SAINTCON. I used the diagrams below to brainstorm how I wanted the network laid out.

Network Diagram

flowchart TB
wifi-->opnet

subgraph labnet [FakeNet]
    direction TB

    subgraph corpnet [Corp Network]
        subgraph corpnetprod [Production Network]


            smtp[Mail Server]
            www[Corporate Web Site]
            webapp[Product]
        end

        subgraph corpnetinternal [Internal Network]

            corpuser[Corp User]
            codehosting[Code Server]

        end

    end


    subgraph wifi [Guest Network]
        operator001("Operator Physical Machines")
    end



    subgraph opnet [Op Network]
        op001("Operations VMs")
    end


    opnet-->corpnetprod;
    corpnetinternal<-->corpnetprod;

end

Work Flows

stateDiagram-v2
    [*] --> Onboard
    Onboard --> OSINT
    OSINT --> InfrastructureDev
    InfrastructureDev --> CampaignDevelopment
    CampaignDevelopment --> Test
    Test --> Phish

    state Onboard {
        [*] --> ConnectNet
        ConnectNet --> AccessVM
        AccessVM --> ReadDocs
        ReadDocs --> [*]
    }

    state OSINT {
        [*] --> SearchEngines
        [*] --> CrunchBase
        [*] --> LinkedIn
        [*] --> CodeHosting
        [*] --> DNSRecon
        [*] --> MailServers
        [*] --> LoginPages
        SearchEngines --> [*]
        CrunchBase --> [*]
        LinkedIn --> [*]
        CodeHosting --> [*]
        DNSRecon --> [*]
        MailServers --> [*]
        LoginPages --> [*]
    }

    state InfrastructureDev {
        SpinUpServices : Spin up Services
        PointDomains : Point Domains
        StaticSite : Static Site

        [*] --> SpinUpServices
        SpinUpServices --> PointDomains
        SpinUpServices --> Modlishka
        SpinUpServices --> Gophish
        SpinUpServices --> StaticSite
        Modlishka --> [*]
        Gophish --> [*]
        StaticSite --> [*]
        PointDomains --> [*]
    }

    state CampaignDevelopment {

        [*] --> EmailTemplates
        [*] --> PayloadCreation
        EmailTemplates --> TestCampaigns
        PayloadCreation --> TestCampaigns
        TestCampaigns --> [*]
    }

    state Test {
        [*] --> SendTestEmail
        SendTestEmail --> TestCredHarvesting
        TestCredHarvesting --> TestPayload
        TestPayload --> [*]
    }

    state Phish {
        [*] --> ScheduleCampaign
        ScheduleCampaign --> WaitForCreds
        ScheduleCampaign --> WaitForCallback
        WaitForCreds --> TakeOverSession
        TakeOverSession --> AuthenticatedPostExploitation
        WaitForCallback --> InternalPostExploitation
        InternalPostExploitation --> [*]
        AuthenticatedPostExploitation --> [*]
    }

Curl Resolve DNS through Proxy

If you append an h to the socks5 protocol prefix when using --proxy, DNS resolution happens on the other side of the SOCKS proxy!

curl --proxy socks5h://127.0.0.1:1080 http://internal-host

Cloud metadata URLs

Alibaba

http://100.100.100.200/latest/meta-data/
http://100.100.100.200/latest/meta-data/instance-id
http://100.100.100.200/latest/meta-data/image-id

AWS

http://169.254.169.254/latest/user-data
http://169.254.169.254/latest/user-data/iam/security-credentials/[ROLE NAME]
http://169.254.169.254/latest/meta-data/iam/security-credentials/
http://169.254.169.254/latest/meta-data/iam/security-credentials/[ROLE NAME]
http://169.254.169.254/latest/meta-data/ami-id
http://169.254.169.254/latest/meta-data/reservation-id
http://169.254.169.254/latest/meta-data/hostname
http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
http://169.254.169.254/latest/meta-data/public-keys/[ID]/openssh-key
http://169.254.169.254/
http://169.254.169.254/latest/meta-data/
http://169.254.169.254/latest/meta-data/public-keys/
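
Newer EC2 instances often enforce IMDSv2, which requires a session token before the paths above will answer; a quick sketch:

# Request an IMDSv2 session token, then use it to query the metadata service
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/"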

ECS Task

http://169.254.170.2/v2/credentials/
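
Inside an ECS task, the exact credentials path usually comes from an environment variable the agent sets; a hedged sketch:

# AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is populated by the ECS agent inside the container
curl -s "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"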

Azure

No header required

http://169.254.169.254/metadata/v1/maintenance

Requires header

Must use Metadata: true request header

http://169.254.169.254/metadata/instance?api-version=2017-04-02
http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-04-02&format=text
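
For example, hitting the instance endpoint with the required header:

# Azure IMDS rejects requests without the Metadata: true header
curl -s -H "Metadata: true" "http://169.254.169.254/metadata/instance?api-version=2017-04-02"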

Google Cloud

Requires header

Must use one of the following headers:

  • Metadata-Flavor: Google
  • X-Google-Metadata-Request: True

http://169.254.169.254/computeMetadata/v1/
http://metadata.google.internal/computeMetadata/v1/
http://metadata/computeMetadata/v1/
http://metadata.google.internal/computeMetadata/v1/instance/hostname
http://metadata.google.internal/computeMetadata/v1/instance/id
http://metadata.google.internal/computeMetadata/v1/project/project-id
http://metadata.google.internal/computeMetadata/v1/instance/disks/?recursive=true
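
A quick example against one of the endpoints above:

# The v1 endpoints require the Metadata-Flavor header
curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/hostname"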

No header required (old)

http://metadata.google.internal/computeMetadata/v1beta1/

Digital Ocean

http://169.254.169.254/metadata/v1.json
http://169.254.169.254/metadata/v1/
http://169.254.169.254/metadata/v1/id
http://169.254.169.254/metadata/v1/user-data
http://169.254.169.254/metadata/v1/hostname
http://169.254.169.254/metadata/v1/region
http://169.254.169.254/metadata/v1/interfaces/public/0/ipv6/address

HP Helion

http://169.254.169.254/2009-04-04/meta-data/

Kubernetes

https://kubernetes.default
https://kubernetes.default.svc.cluster.local
https://kubernetes.default.svc/metrics
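
From inside a pod, the API server can usually be queried with the mounted service-account token; a minimal sketch using the default mount paths:

# Use the pod's service-account token and cluster CA to authenticate to the API server
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl -s --cacert "$SA/ca.crt" \
  -H "Authorization: Bearer $(cat "$SA/token")" \
  https://kubernetes.default.svc/metrics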

OpenStack/RackSpace

http://169.254.169.254/openstack

Oracle Cloud

http://192.0.0.192/latest/
http://192.0.0.192/latest/user-data/
http://192.0.0.192/latest/meta-data/
http://192.0.0.192/latest/attributes/

Packetcloud

https://metadata.packet.net/userdata

pfSense and SELKS

I installed SELKS in a VM. I am using Fedora Server (which I kind of regret because of the updates).

Once installed, I went to my pfSense firewall admin interface to bridge LAN and WAN to a third interface (OPT1) so SELKS can see the traffic (ref).

                   WAN
                   +
                   |
                   |
    +--------------v----------------+
    |                               |
    |                               |
    |           PfSense             |
    |                               |
    |                               |
    |                               |
    +---+--------------------+------+
        |                    |
        |                    |
        |                    |
        v                    v
       LAN                  OPT1
                   (to SELKS Monitor port)

pfSense logs in SELKS Kibana

I used some files from here, then enabled log forwarding in pfSense.

AnyProxy Intercept

AnyProxy is an intercepting proxy. I used it to inject scripts into pages to assist with web fuzzing.

const AnyProxy = require('./anyproxy/proxy');
const options = {
    port: 8080,
    rule: require('./dfkt_rule'),
    webInterface: {
        enable: true,
        webPort: 8002
    },
    throttle: 10000,
    forceProxyHttps: true,
    wsIntercept: true,
    silent: false
};
const proxyServer = new AnyProxy.ProxyServer(options);

proxyServer.on('ready', () => {
    console.log('ready')
});
proxyServer.on('error', (e) => {
    console.error(e)
});
proxyServer.start();

// when finished: proxyServer.close();

// dfkt_rule.js - the rule module required above
let hooks = {
    beforeSendRequest: [
        // tag outgoing requests by appending a marker to the User-Agent header
        function (requestDetail, requestDetailModifications) {
            requestDetailModifications.requestOptions = requestDetail.requestOptions;
            requestDetailModifications.requestOptions.headers['User-Agent'] += ' DFKT/1';
        },
    ],

    beforeSendResponse: [
        // copy the upstream response and inject a script tag into <head> if one is present
        function (requestDetail, responseDetail, modifiedResponse) {

            modifiedResponse.response = responseDetail.response;

            console.log(modifiedResponse.response.header);

            if (modifiedResponse.response.body.indexOf('<head>') !== -1) {
                modifiedResponse.response.body = modifiedResponse.response.body.toString().replace('<head>', '<head><script>console.log("dfkt loaded")</script>');
            }
        },
    ],
}


module.exports = {

    summary: 'DFKT rules for web testing',

    * beforeSendRequest(requestDetail) {
        let requestDetailModifications = {};
        for (let hook in hooks.beforeSendRequest) {
            hooks.beforeSendRequest[hook](requestDetail, requestDetailModifications);
        }

        return requestDetailModifications;
    },


    // modify the response before it is sent to the client
    * beforeSendResponse(requestDetail, responseDetail) {
        let responseDetailModifications = {};
        for (let hook in hooks.beforeSendResponse) {
            hooks.beforeSendResponse[hook](requestDetail, responseDetail, responseDetailModifications);
        }

        return responseDetailModifications;
    },

    // // decide whether a given https request should be intercepted
    // *beforeDealHttpsRequest(requestDetail) {
    //
    // },
    // called when an error happens while handling a request
    * onError(requestDetail, error) {

    },
    // called when an error happens while connecting to an https server
    * onConnectError(requestDetail, error) {

    }
};
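
Since forceProxyHttps is enabled, AnyProxy's root CA has to be generated and trusted by the client; a minimal setup sketch, assuming the npm-published anyproxy package:

# Install AnyProxy globally and generate its root CA (trust the generated cert on the client)
npm install -g anyproxy
anyproxy-ca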

Random shell scripting things I may use in the future

Mass move:

# Flatten wlog/<dir>/<file> into <dir>-<file>, dropping a leading "00-" and replacing spaces with dashes
for f in wlog/*; do
  for ff in "$f"/*; do
    cp "$ff" "$(basename "$f")-$(basename "$ff" | sed 's/^00-//g' | sed 's/ /-/g')";
  done;
done

Mass Find and replace:

# Replace the '## <name>' header in each matching file with YAML front matter (date + title)
for f in *todo*; do
  cat $f | sed -e 's/## '$(basename $f | sed 's/-stand-up-notes.md//')$'/---\\\ndate: "2019-03-01T16:20:01"
title: '$(basename $f | sed 's/-stand-up-notes.md//')$' stand up notes\\\n---'/  | tee $f ;
done

Mass adjust markdown headers:

# In markdown files that contain a level-2 header, demote every header one level
# ("# " becomes "## ", "##" becomes "###", and so on)
find . -name '*.md' | while read f; do
  cat "$f" | egrep '^##\s' > /dev/null && echo "$f";
done | while read fn; do
  cat "$fn" | sed 's/^##/###/g' | sed 's/^#\s/## /g' | tee "$fn";
done

Create temp directory:

# "oops" is assumed to be a helper that prints an error message and exits
tmpDir="$(mktemp -d -t tmpdirname.XXXXXXXXXX || oops "Can't create temporary directory")"
cleanup() {
    rm -rf "$tmpDir"
}
trap cleanup EXIT INT QUIT TERM