Random
Publish Flutter App to Google Play Using GitHub Actions
I have been wanting to do more mobile app development and streamline the release process for a while now. I read a few things on how to publish to the Play Store with GitHub Actions, but I kept running into issues. I'll try to document what the issues were and how I was able to get past them.
I am using https://github.com/r0adkll/upload-google-play to publish to the Play Store. To make testing easier, I used https://github.com/nektos/act to run my GitHub Actions locally. I used the largest Docker image, since the other ones didn't seem to have what I needed for Android builds.
First Issue: Error: Unknown error occurred.
The first issue I ran into was: Error: Unknown error occurred.. There are a few issues filed about this error. The consensus is to check that you are referencing your secrets with the appropriate variable names and config options.
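For reference, the workflow step I ended up with was shaped roughly like this (the package name, file path, and secret name here are placeholders for your own):

```yaml
- name: Publish to Play Store
  uses: r0adkll/upload-google-play@v1
  with:
    packageName: example.package.name
    track: production
    status: completed
    releaseFiles: build/app/outputs/flutter-apk/app-release.apk
    serviceAccountJsonPlainText: ${{ secrets.SERVICE_ACCOUNT_JSON }}
```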
I checked out the https://github.com/r0adkll/upload-google-play repo and manually invoked lib/index.js to quickly test my configs worked.
git clone https://github.com/r0adkll/upload-google-play.git
I had to dig into how actions are executed and how the arguments are passed to the action. It turns out they all become environment variables named with an INPUT_ prefix and the input name uppercased. My invocation looked something like this:
INPUT_PACKAGENAME=example.package.name \
INPUT_TRACK=production \
INPUT_SERVICEACCOUNTJSONPLAINTEXT=$(cat ~/Downloads/google-cloud-credentials.json | jq -c) \
INPUT_STATUS=completed \
INPUT_RELEASEFILES=$HOME/flutter-project-dir/build/app/outputs/flutter-apk/app-release.apk \
INPUT_DEBUG=1 \
node lib/index.js
This helped me determine that my secrets were being referenced incorrectly.
Second Issue: You cannot rollout this release because it does not allow any existing users to upgrade to the newly added APKs.
This one didn't make sense, so I switched to doing a draft release so I could inspect the release from the Play Console. It turns out that my efforts to support Wear OS made it so I could not support Android phones, meaning none of the Android phones using my app would be able to upgrade to the new version. So I ended up reverting my app's support for Wear OS. I'll have to figure out how to support watches and phones later.
Adventures in Python
I try to like Python. I really do. I attempt to use it and be like all the other cool people. Sadly, I am a dumb Python noob. I see an import like:
from Crypto.Cipher import AES
Cool, let's install the crypto package…
pip install crypto
It returns command not found: pip. Fair enough, I am not a Python dev and my environment isn't set up. Let's run the module.
python3 -m pip install crypto
More errors:
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try brew install
xyz, where xyz is the package you are trying to
install.
If you wish to install a Python library that isn't in Homebrew,
use a virtual environment:
python3 -m venv path/to/venv
source path/to/venv/bin/activate
python3 -m pip install xyz
If you wish to install a Python application that isn't in Homebrew,
it may be easiest to use 'pipx install xyz', which will manage a
virtual environment for you. You can install pipx with
brew install pipx
You may restore the old behavior of pip by passing
the '--break-system-packages' flag to pip, or by adding
'break-system-packages = true' to your pip.conf file. The latter
will permanently disable this error.
If you disable this error, we STRONGLY recommend that you additionally
pass the '--user' flag to pip, or set 'user = true' in your pip.conf
file. Failure to do this can result in a broken Homebrew installation.
Read more about this behavior here: <https://peps.python.org/pep-0668/>
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
Ok, I guess venvs are no longer a suggestion and instead a way of life.
mkdir -p ~/python-sucks/venv-for-crypto
python3 -m venv ~/python-sucks/venv-for-crypto
source ~/python-sucks/venv-for-crypto/bin/activate
hey it works!
Collecting crypto
Using cached crypto-1.4.1-py2.py3-none-any.whl.metadata (3.4 kB)
Collecting Naked (from crypto)
Using cached Naked-0.1.32-py2.py3-none-any.whl.metadata (931 bytes)
Collecting shellescape (from crypto)
Using cached shellescape-3.8.1-py2.py3-none-any.whl.metadata (2.8 kB)
Collecting requests (from Naked->crypto)
Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting pyyaml (from Naked->crypto)
Using cached PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl.metadata (2.1 kB)
Collecting charset-normalizer<4,>=2 (from requests->Naked->crypto)
Using cached charset_normalizer-3.4.1-cp313-cp313-macosx_10_13_universal2.whl.metadata (35 kB)
Collecting idna<4,>=2.5 (from requests->Naked->crypto)
Using cached idna-3.10-py3-none-any.whl.metadata (10 kB)
Collecting urllib3<3,>=1.21.1 (from requests->Naked->crypto)
Using cached urllib3-2.3.0-py3-none-any.whl.metadata (6.5 kB)
Collecting certifi>=2017.4.17 (from requests->Naked->crypto)
Using cached certifi-2024.12.14-py3-none-any.whl.metadata (2.3 kB)
Using cached crypto-1.4.1-py2.py3-none-any.whl (18 kB)
Using cached Naked-0.1.32-py2.py3-none-any.whl (587 kB)
Using cached shellescape-3.8.1-py2.py3-none-any.whl (3.1 kB)
Using cached PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl (171 kB)
Using cached requests-2.32.3-py3-none-any.whl (64 kB)
Using cached certifi-2024.12.14-py3-none-any.whl (164 kB)
Using cached charset_normalizer-3.4.1-cp313-cp313-macosx_10_13_universal2.whl (195 kB)
Using cached idna-3.10-py3-none-any.whl (70 kB)
Using cached urllib3-2.3.0-py3-none-any.whl (128 kB)
Installing collected packages: shellescape, urllib3, pyyaml, idna, charset-normalizer, certifi, requests, Naked, crypto
Successfully installed Naked-0.1.32 certifi-2024.12.14 charset-normalizer-3.4.1 crypto-1.4.1 idna-3.10 pyyaml-6.0.2 requests-2.32.3 shellescape-3.8.1 urllib3-2.3.0
Let's have some fun and start writing Python.
from Crypto.Cipher import AES
Sadness…
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
from Crypto.Cipher import AES
ModuleNotFoundError: No module named 'Crypto'
Can't find the Crypto module.
Oh hey, I am on a Mac; I guess I need some special hacks. I've seen this in some random code.
import sys
import crypto
sys.modules["Crypto"] = crypto
from Crypto.Cipher import AES
More Sadness…
Traceback (most recent call last):
File "<python-input-5>", line 1, in <module>
from Crypto.Cipher import AES
ModuleNotFoundError: No module named 'Crypto.Cipher'
Oh, this is for Crypto.Cipher. Let's look at our package.
ls ~/python-sucks/venv-for-crypto/lib/python3.13/site-packages/crypto
Hrmm, no Cipher folder.
__init__.py __pycache__ app.py decryptoapp.py library settings.py
oh hey. I think I need pycrypto instead of crypto.
python3 -m pip install pycrypto
Let's see if it works.
import sys
import crypto
sys.modules["Crypto"] = crypto
from Crypto.Cipher import AES
Yay! It does. It feels wrong, though. What did this just do? Did it install the pycrypto package into the existing crypto folder? Yes, it did. Hooray for case-insensitive file systems combined with stripping the py prefix off the installed module.
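One way to catch this kind of collision earlier is to ask Python which on-disk file an import actually resolves to (a quick sketch; "json" is a stand-in module name here, swap in Crypto inside the affected venv):

```shell
# Print the file a module resolves to before trusting the import.
python3 -c 'import importlib.util; print(importlib.util.find_spec("json").origin)'
```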
Let's create a new venv and install the right module.
mkdir -p ~/python-sucks/venv-for-crypto-2
python3 -m venv ~/python-sucks/venv-for-crypto-2
source ~/python-sucks/venv-for-crypto-2/bin/activate
python3 -m pip install pycrypto
And finally, try just the initial line that started this adventure.
from Crypto.Cipher import AES
Yes! It works the way it should.
Packer, Ubuntu Noble, and VirtualBox
I have been using Packer for quite a while. However, all my interactions have used JSON instead of HCL. I wanted to set up a new build using HCL, VirtualBox, and Ubuntu 24.04. I am going to attempt to create documentation for using HCL with VirtualBox to build a custom image based on the latest Ubuntu LTS release (with cloud init).
In my research I found some decent guides that did most of what I wanted. I'll use these as references for building my specific use case.
- QEMU: https://github.com/shantanoo-desai/packer-ubuntu-server-uefi/tree/main
- VMWare: https://github.com/ynlamy/packer-ubuntuserver24_04/blob/main/vmware-iso-ubuntuserver24_04.pkr.hcl
The official docs will also come in handy and can be found here: https://developer.hashicorp.com/packer/integrations/hashicorp/virtualbox.
# virtualbox.pkr.hcl
packer {
  required_version = ">= 1.7.0"
  required_plugins {
    virtualbox = {
      version = "~> 1"
      source  = "github.com/hashicorp/virtualbox"
    }
    ansible = {
      version = ">= 1.1.1"
      source  = "github.com/hashicorp/ansible"
    }
  }
}

source "virtualbox-iso" "packer-vm-ubuntu" {
  guest_os_type    = "Ubuntu_64"
  iso_url          = "https://releases.ubuntu.com/noble/ubuntu-24.04-live-server-amd64.iso"
  iso_checksum     = "sha256:8762f7e74e4d64d72fceb5f70682e6b069932deedb4949c6975d0f0fe0a91be3"
  http_directory   = "./http/24.04/"
  ssh_username     = "packer"
  ssh_password     = "packer"
  ssh_timeout      = "10m"
  shutdown_command = "echo 'packer' | sudo -S shutdown -P now"
  headless         = false
  firmware         = "efi"
  boot_command     = ["e<wait><down><down><down><end> autoinstall 'ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/'<F10>"]
  boot_wait        = "5s"
  vboxmanage = [
    ["modifyvm", "{{.Name}}", "--memory", "4096"],
    ["modifyvm", "{{.Name}}", "--cpus", "4"],
    ["modifyvm", "{{.Name}}", "--nat-localhostreachable1", "on"]
  ]
}

# build {
#   sources = ["source.virtualbox-iso.packer-vm-ubuntu"]
#
#   provisioner "ansible" {
#     playbook_file = "../../ansible/ubuntu-desktop.yaml"
#   }
# }
I'd like to point out a few things. First, the http_directory property specifies a directory to be exposed via HTTP. This will be consumed by cloud-init; we'll use it to create our initial user.
In our http_directory we'll need to create two files:
- meta-data
- user-data
We'll leave meta-data empty; if it is not present, cloud-init will refuse to consume our user-data file.
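Creating that layout is just (paths mirror the http_directory above):

```shell
# meta-data must exist (even empty) or cloud-init ignores user-data.
mkdir -p http/24.04
touch http/24.04/meta-data
touch http/24.04/user-data   # the autoinstall config below goes here
```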
#cloud-config
autoinstall:
  version: 1
  locale: en_US
  keyboard:
    layout: us
  ssh:
    install-server: true
    allow-pw: true
  packages:
    - zsh
  updates: all
  late-commands:
    - |
      if [ -d /sys/firmware/efi ]; then
        apt-get install -y efibootmgr
        efibootmgr -o $(efibootmgr | perl -n -e '/Boot(.+)\* Ubuntu/ && print $1')
      fi
  user-data:
    preserve_hostname: false
    hostname: carbon
    package_upgrade: true
    timezone: UTC
    users:
      - name: packer
        # passwd must be a password hash, you can generate it with `openssl passwd -6 replacewithyourpassword`
        passwd: $6$ZfIbBMQd5rmGTGPk$AWrvIL1v4Xq6jsSR72KsSONa2VpSnr8SZHPDF2l6pNNcQ3HKjqWF2JEBYepl4LnnmzKiKFEcRuf7lfyOMooq50
        groups: [adm, cdrom, dip, plugdev, lxd, sudo]
        lock-passwd: false
        sudo: ALL=(ALL) NOPASSWD:ALL
        shell: /bin/zsh
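As the inline comment notes, the passwd value is a SHA-512 crypt hash. Generating one looks like this (the password here is a placeholder):

```shell
# Emit a SHA-512 crypt hash suitable for the cloud-init passwd field.
# Output starts with $6$ and differs per run because the salt is random.
openssl passwd -6 replacewithyourpassword
```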
Now we should be able to build our VM using VirtualBox.
packer build virtualbox.pkr.hcl
If you have an Ansible playbook, you can reference it in the build section.
SAINTCON Training
I've been working on a phishing training for SAINTCON. I used this to brainstorm how I wanted the network laid out.
Network Diagram
flowchart TB
wifi-->opnet
subgraph labnet [FakeNet]
direction TB
subgraph corpnet [Corp Network]
subgraph corpnetprod [Production Network]
smtp[Corporate Mail Server]
www[Corporate Web Site]
webapp[product]
end
subgraph corpnetinternal [Internal Network]
corpuser[Corp User]
codehosting[Code Server]
end
end
subgraph wifi [Guest Network]
operator001("Operator Physical Machines")
end
subgraph opnet [Op Network]
op001("Operations VMs")
end
opnet-->corpnetprod;
corpnetinternal<-->corpnetprod;
end
Work Flows
stateDiagram-v2
[*] --> Onboard
Onboard --> OSINT
OSINT --> InfrastructureDev
InfrastructureDev --> CampaignDevelopment
CampaignDevelopment --> Test
Test --> Phish
state Onboard {
[*] --> ConnectNet
ConnectNet --> AccessVM
AccessVM --> ReadDocs
ReadDocs --> [*]
}
state OSINT {
[*] --> SearchEngines
[*] --> CrunchBase
[*] --> LinkedIn
[*] --> CodeHosting
[*] --> DNSRecon
[*] --> MailServers
[*] --> LoginPages
SearchEngines --> [*]
CrunchBase --> [*]
LinkedIn --> [*]
CodeHosting --> [*]
DNSRecon --> [*]
MailServers --> [*]
LoginPages --> [*]
}
state InfrastructureDev {
SpinUpServices : Spin up Services
PointDomains : Point Domains
StaticSite : Static Site
[*] --> SpinUpServices
SpinUpServices --> PointDomains
SpinUpServices --> Modlishka
SpinUpServices --> Gophish
SpinUpServices --> StaticSite
Modlishka --> [*]
Gophish --> [*]
StaticSite --> [*]
PointDomains --> [*]
}
state CampaignDevelopment {
[*] --> EmailTemplates
[*] --> PayloadCreation
EmailTemplates --> TestCampaigns
PayloadCreation --> TestCampaigns
TestCampaigns --> [*]
}
state Test {
[*] --> SendTestEmail
SendTestEmail --> TestCredHarvesting
TestCredHarvesting --> TestPayload
TestPayload --> [*]
}
state Phish {
[*] --> ScheduleCampaign
ScheduleCampaign --> WaitForCreds
ScheduleCampaign --> WaitForCallback
WaitForCreds --> TakeOverSession
TakeOverSession --> AuthenticatedPostExploitation
WaitForCallback --> InternalPostExploitation
InternalPostExploitation --> [*]
AuthenticatedPostExploitation --> [*]
}
Curl Resolve DNS through Proxy
If you append h to your socks5 protocol prefix when using --proxy, the DNS resolution happens on the other side of the SOCKS proxy!
curl --proxy socks5h://127.0.0.1:1080 http://internal-host
Cloud metadata URLs
Alibaba
http://100.100.100.200/latest/meta-data/
http://100.100.100.200/latest/meta-data/instance-id
http://100.100.100.200/latest/meta-data/image-id
AWS
http://169.254.169.254/latest/user-data
http://169.254.169.254/latest/user-data/iam/security-credentials/[ROLE NAME]
http://169.254.169.254/latest/meta-data/iam/security-credentials/
http://169.254.169.254/latest/meta-data/iam/security-credentials/[ROLE NAME]
http://169.254.169.254/latest/meta-data/ami-id
http://169.254.169.254/latest/meta-data/reservation-id
http://169.254.169.254/latest/meta-data/hostname
http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
http://169.254.169.254/latest/meta-data/public-keys/[ID]/openssh-key
http://169.254.169.254/
http://169.254.169.254/latest/meta-data/
http://169.254.169.254/latest/meta-data/public-keys/
ECS Task
http://169.254.170.2/v2/credentials/
References
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-categories
- https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v2.html
Azure
No header required
http://169.254.169.254/metadata/v1/maintenance
Requires header
Must use the Metadata: true request header
http://169.254.169.254/metadata/instance?api-version=2017-04-02
http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-04-02&format=text
References
- https://azure.microsoft.com/en-us/blog/what-just-happened-to-my-vm-in-vm-metadata-service/
- https://docs.microsoft.com/en-us/azure/virtual-machines/windows/instance-metadata-service
Google Cloud
Requires header
Must use one of the following headers:
Metadata-Flavor: Google
X-Google-Metadata-Request: True
http://169.254.169.254/computeMetadata/v1/
http://metadata.google.internal/computeMetadata/v1/
http://metadata/computeMetadata/v1/
http://metadata.google.internal/computeMetadata/v1/instance/hostname
http://metadata.google.internal/computeMetadata/v1/instance/id
http://metadata.google.internal/computeMetadata/v1/project/project-id
http://metadata.google.internal/computeMetadata/v1/instance/disks/?recursive=true
No header required (old)
http://metadata.google.internal/computeMetadata/v1beta1/
Digital Ocean
http://169.254.169.254/metadata/v1.json
http://169.254.169.254/metadata/v1/
http://169.254.169.254/metadata/v1/id
http://169.254.169.254/metadata/v1/user-data
http://169.254.169.254/metadata/v1/hostname
http://169.254.169.254/metadata/v1/region
http://169.254.169.254/metadata/v1/interfaces/public/0/ipv6/address
HP Helion
http://169.254.169.254/2009-04-04/meta-data/
Kubernetes
https://kubernetes.default
https://kubernetes.default.svc.cluster.local
https://kubernetes.default.svc/metrics
References
- https://twitter.com/Random_Robbie/status/1072242182306832384
- https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
OpenStack/RackSpace
http://169.254.169.254/openstack
Oracle Cloud
http://192.0.0.192/latest/
http://192.0.0.192/latest/user-data/
http://192.0.0.192/latest/meta-data/
http://192.0.0.192/latest/attributes/
Packetcloud
https://metadata.packet.net/userdata
PfSense and SELKS
I installed SELKS in a VM. I am using Fedora Server as the host (which I kind of regret because of the updates).
Once installed, I went to my pfSense firewall admin interface to bridge LAN and WAN to a third interface (OPT1). ref
WAN
+
|
|
+--------------v----------------+
| |
| |
| PfSense |
| |
| |
| |
+---+--------------------+------+
| |
| |
| |
v v
LAN OPT1
(to SELKS Monitor port)
pfSense logs in SELKS Kibana
I used some files from here, then enabled log forwarding in pfSense.
Anyproxy Intercept
AnyProxy is an intercepting proxy. I used it to inject scripts into pages to assist in web fuzzing.
const AnyProxy = require('./anyproxy/proxy');
const options = {
port: 8080,
rule: require('./dfkt_rule'),
webInterface: {
enable: true,
webPort: 8002
},
throttle: 10000,
forceProxyHttps: true,
wsIntercept: true,
silent: false
};
const proxyServer = new AnyProxy.ProxyServer(options);
proxyServer.on('ready', () => {
console.log('ready')
});
proxyServer.on('error', (e) => {
console.error(e)
});
proxyServer.start();
// when finished:
// proxyServer.close();
// dfkt_rule
let hooks = {
beforeSendRequest: [
function (requestDetail, requestDetailModifications) {
requestDetailModifications.requestOptions = requestDetail.requestOptions;
requestDetailModifications.requestOptions.headers['User-Agent'] += ' DFKT/1';
},
],
beforeSendResponse: [
function (requestDetail, responseDetail, modifiedResponse) {
modifiedResponse.response = responseDetail.response;
console.log(modifiedResponse.response.header);
if (modifiedResponse.response.body.indexOf('<head>') !== -1) {
modifiedResponse.response.body = modifiedResponse.response.body.toString().replace('<head>', '<head><script>console.log("dfkt loaded")</script>');
}
},
],
}
module.exports = {
summary: 'DFKT rules for web testing',
* beforeSendRequest(requestDetail) {
let requestDetailModifications = {};
for (let hook in hooks.beforeSendRequest) {
hooks.beforeSendRequest[hook](requestDetail, requestDetailModifications);
}
return requestDetailModifications;
},
// deal response before send to client
* beforeSendResponse(requestDetail, responseDetail) {
let responseDetailModifications = {};
for (let hook in hooks.beforeSendResponse) {
hooks.beforeSendResponse[hook](requestDetail, responseDetail, responseDetailModifications);
}
return responseDetailModifications;
},
// // if deal https request
// *beforeDealHttpsRequest(requestDetail) {
//
// },
// error happened when dealing requests
* onError(requestDetail, error) {
},
// error happened when connect to https server
* onConnectError(requestDetail, error) {
}
};
Random shell scripting things I may use in the future
Mass move:
for f in wlog/*; do
  for ff in "$f"/*; do
    cp "$ff" "$(basename "$f")-$(basename "$ff" | sed 's/^00-//' | sed 's/ /-/g')";
  done;
done
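A self-contained way to sanity-check the loop, using a throwaway fixture (all directory and file names here are invented for the demo):

```shell
# Fixture demo of the mass move: flatten wlog/<dir>/<file> into <dir>-<file>,
# stripping a leading "00-" and replacing spaces with dashes.
demo=$(mktemp -d)
cd "$demo"
mkdir -p wlog/2019-01
echo hi > "wlog/2019-01/00-monday notes.md"
for f in wlog/*; do
  for ff in "$f"/*; do
    cp "$ff" "$(basename "$f")-$(basename "$ff" | sed 's/^00-//' | sed 's/ /-/g')"
  done
done
ls   # 2019-01-monday-notes.md alongside wlog/
```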
Mass Find and replace:
for f in *todo*; do
cat $f | sed -e 's/## '$(basename $f | sed 's/-stand-up-notes.md//')$'/---\\\ndate: "2019-03-01T16:20:01"
title: '$(basename $f | sed 's/-stand-up-notes.md//')$' stand up notes\\\n---'/ | tee $f ;
done
Mass adjust markdown headers:
find . -name '*.md' \
  | while read f; do cat "$f" | egrep '^##\s' > /dev/null && echo "$f"; done \
  | while read fn; do cat "$fn" | sed 's/^##/###/g' | sed 's/^#\s/## /g' | tee "$fn"; done
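The two sed passes can be sanity-checked on a fixture file before running them over a whole tree (the file name and contents here are invented; a literal space stands in for \s so the demo stays portable):

```shell
# Demote headers in a throwaway file: ## -> ###, then # -> ##.
tmp=$(mktemp -d)
printf '# Title\n## Section\n' > "$tmp/note.md"
sed 's/^##/###/g' "$tmp/note.md" | sed 's/^# /## /g'
# prints:
# ## Title
# ### Section
```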
Create temp directory:
tmpDir="$(mktemp -d -t tmpdirname.XXXXXXXXXX)" || { echo "Can't create temporary directory" >&2; exit 1; }
cleanup() {
rm -rf "$tmpDir"
}
trap cleanup EXIT INT QUIT TERM