Note: this does not yet contain the actual library updates for 3.6.
This is a fork of the [oss-attribution-builder](https://github.com/amzn/oss-attribution-builder/) created by Amazon. The upstream repository is in a read-only state and no longer works out of the box. This fork fixes the problems that caused it to fail.
The main thing this repo adds is documentation on how to specify where the database the tool uses to store license data will live. The main benefit of keeping the database outside the Docker container is that backups are much easier. Note that the database directory will contain the binary database files of the PostgreSQL server the tool uses; it is not an ASCII-based SQL dump, though those can be made too (see below).
`docker-compose.yml` contains a location for the database directory:

```yaml
volumes:
  - ./utils:/utils  # Contains utility scripts for making SQL backups; do not remove/comment out
  # Of the two volumes below, choose ONE to uncomment
  #- ./docs/schema.sql:/docker-entrypoint-initdb.d/init.sql  # Uncomment this to start with EMPTY database
  - ./pgdata/sql-dump.sql:/docker-entrypoint-initdb.d/init.sql  # Uncomment this to import sql-dump.sql into the database
  # Leave this entry.
  - ./pgdata:/var/lib/postgresql/data  # Set the location you want the PostgreSQL data to go
```
Note in the contents above that there is an `init.sql` populated either from `docs/schema.sql` or from a file `pgdata/sql-dump.sql`. Their significance is explained in the next sections.
There are three ways to use this project:
- From scratch, with a new database
- Starting with a TEXT dump of a previous install
- Starting with a directory of files from a previous install
Whatever you uncomment, you can leave things as they are after the first run. To re-run the database creation/restore, you will need to remove the database directory and delete the container:

```shell
sudo rm -rf pgdata/pgdata
docker container rm oss-attribution-builder_database_1
```
## Starting from scratch, with a new database
For this, uncomment the line:

```yaml
- ./docs/schema.sql:/docker-entrypoint-initdb.d/init.sql  # Uncomment this to start with EMPTY database
```

Make sure the other line regarding `init.sql` is commented out. Then run `docker-compose up` and navigate to http://localhost:8000.
## Starting with a TEXT dump of a previous install
For this, uncomment the line:

```yaml
- ./pgdata/sql-dump.sql:/docker-entrypoint-initdb.d/init.sql  # Uncomment this to import sql-dump.sql into the database
```

Make sure the other line regarding `init.sql` is commented out. Place a database dump in `pgdata/sql-dump.sql` and then run `docker-compose up`. This will restore a database from that file and start the application.
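Before starting, you can sanity-check that the dump really is a TEXT dump: pg_dump's custom (binary) format begins with the magic bytes `PGDMP` and will not work as `init.sql`. A minimal sketch, demonstrated on a throwaway sample file (point it at your real `pgdata/sql-dump.sql` instead):

```shell
# Create a tiny sample that mimics the header of a pg_dump TEXT dump
# (your real file would be pgdata/sql-dump.sql).
printf -- '--\n-- PostgreSQL database dump\n--\n' > /tmp/sql-dump-sample.sql

# A custom-format (binary) dump starts with the magic bytes "PGDMP";
# only a plain-text dump can be mounted as init.sql.
if [ "$(head -c 5 /tmp/sql-dump-sample.sql)" = "PGDMP" ]; then
  echo "binary dump - will NOT work as init.sql"
else
  echo "plain-text dump - OK to mount as init.sql"
fi
```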
## Starting with a directory of files from a previous install
If you've saved the (binary) PostgreSQL database files of a previous install, you can restore these into the directory `pgdata/pgdata` and simply run `docker-compose up`. When it finds a usable database there, it will not try to run any database initialization scripts. Note that the database files need to be PostgreSQL 13-based; anything else will not work.
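A quick way to check which major version a binary data directory came from: PostgreSQL records it in the `PG_VERSION` file at the root of the data directory. Sketched here on a mock directory; point it at your real `pgdata/pgdata` instead:

```shell
# Mock data directory standing in for pgdata/pgdata.
mkdir -p /tmp/pgdata-demo
echo "13" > /tmp/pgdata-demo/PG_VERSION

# PostgreSQL stores the data directory's major version in PG_VERSION;
# for this setup it must read "13".
cat /tmp/pgdata-demo/PG_VERSION
```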
## Making a database dump

Run the backup command (see the utility scripts in `utils/`). This will make a TEXT-based database dump in `pgdata/dumps` with a name including a date/time stamp. These dumps can be used for restoring into a new install (see above).
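The naming scheme can be sketched like this (the exact pattern used by the scripts in `utils/` may differ; this is an assumption for illustration):

```shell
# Build a dump filename that embeds a date/time stamp, so successive
# backups in pgdata/dumps never overwrite each other.
STAMP=$(date +%Y-%m-%d_%H-%M-%S)
DUMP="pgdata/dumps/sql-dump-${STAMP}.sql"
echo "$DUMP"
```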
---

*Original documentation below.*

# OSS Attribution Builder
OSS Attribution Builder is a website that helps teams create attribution documents for software products. An attribution document is a text file, web page, or screen in just about every software application that lists the software components used and their licenses. They're often in the About screens, and are sometimes labeled "Open Source Notices", "Credits", or other similar jargon.
- Install Docker
- Clone this repository
- Run `docker-compose up`
- Visit http://localhost:8000/
- The demo uses HTTP basic auth. Enter any username and password. Use `admin` to test out admin functionality.
## Using the Website
The attribution builder was originally an Amazon-internal tool. Some portions had to be removed to make this a sensible open source project. As such, there are some warts:
- Projects have contact lists, but at the moment the UI only supports one contact (the legal contact).
These will all be fixed in time, but be aware that some things might be weird for a while.
If you're ready to integrate the attribution builder into your own environment, there are some things to set up:
Open up `config/default.js` and poke around. This configuration is loaded when you run `docker-compose` or otherwise launch the application.
The attribution builder has support for two types of license definitions:
- SPDX identifiers
- "Known" license texts and tags
SPDX identifiers are just used for pre-filling the license selector, but do not (currently) have texts. The more useful type of license is a "known" license, where you (the administrator) supply the text of the license and any tags you'd like to apply.
For information on adding your own "known" licenses, see the license README. There are two existing licenses in the same directory you can look at for examples.
Tags allow you to add arbitrary validation rules to a license. They can be useful for:
- Verifying a license is being used in the right way (e.g., LGPL and how a package was linked)
- Annotating a particular license as needing follow up, if your business has special processes
- Providing guidance on attribution for licenses with many variants
- Modifying how a license is displayed in an attribution document
For information on what tags can do and how to create your own, see the tags README.
The attribution builder offers some form of extensions that allow you to alter client-side site behavior and appearance, without needing to patch internals. This can make upgrades easier.
See the extensions README for details.
The attribution builder supports restricting access to certain people or groups using project ACLs. These can also be used for administration and to "verify" packages (details on that in a later section). The default implementation, `nullauth`, is not very useful for most environments; you will want to write your own when launching more broadly.
See the base auth interface for implementation details.
To start up the server, run `build/server/localserver.js` after building with `npm run build`. There are some environment variables you'll probably want to set when running:

- `NODE_ENV` should most likely be set to `production`.
- `CONFIG_NAME` should be set to the basename (no extension) of the configuration file you created above. The default is "default".
The server runs in HTTP only. You probably want to put a thin HTTPS web server or proxy in front of it.
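As one common option, a minimal TLS-terminating reverse proxy in front of the app could look like the nginx fragment below; the hostname, certificate paths, and upstream port are placeholder assumptions (the demo serves on port 8000):

```nginx
# Hypothetical nginx reverse proxy terminating HTTPS in front of the
# attribution builder, which itself speaks plain HTTP.
server {
  listen 443 ssl;
  server_name attribution.example.com;                  # placeholder hostname

  ssl_certificate     /etc/ssl/certs/attribution.pem;   # placeholder paths
  ssl_certificate_key /etc/ssl/private/attribution.key;

  location / {
    proxy_pass http://127.0.0.1:8000;                   # wherever the app listens
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
  }
}
```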
See CONTRIBUTING for information.
`npm install` and then `npm run dev` will get you off the ground for local development. This will start a Docker container for PostgreSQL, but will use a local copy of tsc, webpack, node, etc. so you can iterate quickly.
Once things have started up, you can open http://0.0.0.0:2425/webpack-dev-server/. This will automatically reload on browser changes, and the backend will also automatically restart on server-side changes.
Handy environment variables:

- `NODE_ENV`: when unset or set to `development`, you'll get full source maps & debug logs
- `DEBUG_SQL`: when set (to anything), this will show SQL queries on the terminal as they execute
`npm test` will run unit tests. These are primarily server focused.

`npm run test-ui` will run Selenium tests. You can set the environment variable `SELENIUM_DRIVER` if you want a custom driver -- by default, it'll try to use Chrome, and if that's not available it'll fall back to PhantomJS.
When debugging UI tests, it may be easier to change `docker-compose.selenium.yml` and then connect to the container via VNC (port 5900, password "secret"). Run the container and your tests separately:

```shell
docker-compose -f docker-compose.selenium.yml up --build
tsc && jasmine --stop-on-failure=true 'build/selenium/*.spec.js'
```
Tests failing for seemingly no reason? `driver.sleep` not working? Make sure the Jasmine timeout on your test is high enough.