Talk:Complete overhaul of PCGen from Java


Comments by Rizzen


Base

Steps in PCGen's current process:

  • Load PCGen
  • PCGen looks at the GameSystem folder, validates the game systems it finds, and adds them to the available Load list
  • The UI starts and the user can select a Game System and then the books associated with that system
  • Upon Load, the data is validated and stored in memory
  • The user can load an existing PC or create a new PC
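
A minimal sketch of the folder scan in the first step (in C#, since that is one of the candidate languages below; the folder layout, descriptor file name, and type names are assumptions, not PCGen's actual code):

  using System.Collections.Generic;
  using System.IO;

  public class GameSystem
  {
      public string Name { get; set; }
      public string Folder { get; set; }
  }

  public static class GameSystemScanner
  {
      // Scan the GameSystem folder and build the available Load list.
      public static List<GameSystem> BuildLoadList(string root)
      {
          var loadList = new List<GameSystem>();
          foreach (var dir in Directory.EnumerateDirectories(root))
          {
              // Validation stand-in: only accept folders that contain a
              // (hypothetical) system descriptor file.
              if (File.Exists(Path.Combine(dir, "system.xml")))
                  loadList.Add(new GameSystem { Name = Path.GetFileName(dir), Folder = dir });
          }
          return loadList;
      }
  }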

Problem points:

  • All data is loaded and stored in memory at once - limited by the RAM allocated to the Java heap
  • The EXE version does not handle memory well when sources are reloaded

Solutions:

  • Possibly use SQL DB.

JEP - Obsolete System

  • We use the JEP expression parser library to handle variables

Problem points:

  • JEP is a grandfathered (legacy) library
  • The current implementation treats all variables as global to the PC
  • Different game systems would need the ability to have variables on equipment, the PC, and various other objects

Solutions:

  • Could a scripting language replace this system?
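
Whatever evaluator replaces JEP, the scoping problem above can be handled separately from it. A minimal sketch of per-object variable scopes with fallback to the PC (the type and method names are assumptions, not an existing design):

  using System.Collections.Generic;

  // Hypothetical variable scope with parent fallback.
  public class VarScope
  {
      private readonly Dictionary<string, double> _vars = new Dictionary<string, double>();
      private readonly VarScope _parent;

      public VarScope(VarScope parent = null) => _parent = parent;

      public void Set(string name, double value) => _vars[name] = value;

      // Look up locally first, then walk up toward the PC-global scope.
      public double Get(string name)
      {
          if (_vars.TryGetValue(name, out var v)) return v;
          if (_parent != null) return _parent.Get(name);
          throw new KeyNotFoundException(name);
      }
  }

Usage: equipment gets its own scope chained to the PC's, so "vars on equipment" can shadow or extend the PC's globals instead of colliding with them.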


Homebrew Support:

  • PCGen uses flat text files called LST files to represent books. Each LST file is associated with a different section of a book or class of objects - i.e. Races, Classes, Kits, Alignments, Abilities (aka Stats), Features (aka Abilities), Sizes, Proficiencies, Equipment, Equipment Modifications, and engine-specific material (data, tables, game system rules, house rules, etc.)
  • The OGL requires human-readable data (though it can be interpreted or translated into something else during use)
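
For illustration, a skills entry in an LST file is a single tab-separated line of TAG:value tokens, roughly like this (an approximation from memory, not copied from a shipped data set):

  Appraise	KEYSTAT:INT	TYPE:Intelligence.Standard.Base.Appraise	SOURCEPAGE:SkillsI

The YAML example at the bottom of this page is a proposed re-encoding of this same information.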

Problem Points:

  • LST files, although not hard to edit, do not come naturally to most users. The end goal is to include a smart editor for adding custom content.


User Interface

  • Use XML files to construct the UI for the various game systems; extension source books would have additional XML to add extra tabs and fields. This prevents the UI from being hardcoded, and the engine does not need to know about the UI until a game system is selected.
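
A minimal sketch of consuming such a file (in C#; the XML shape, tag names, and attribute names here are invented for illustration):

  using System;
  using System.Xml.Linq;

  class UiLoader
  {
      static void Main()
      {
          // Hypothetical UI definition that a source book extension might ship.
          var doc = XDocument.Parse(@"
              <ui game='Pathfinder'>
                <tab name='Spells'>
                  <field label='Caster Level' bind='CASTERLEVEL'/>
                </tab>
              </ui>");

          foreach (var tab in doc.Root.Elements("tab"))
          {
              Console.WriteLine($"Tab: {tab.Attribute("name")?.Value}");
              foreach (var field in tab.Elements("field"))
                  Console.WriteLine(
                      $"  Field: {field.Attribute("label")?.Value} -> " +
                      $"{field.Attribute("bind")?.Value}");
          }
      }
  }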


Languages for engines

Possible options:

  • C#, #GTK, CS-Script:
    • The Mono CLR is available on Windows, Linux and Mac OS X, and Mono is supported by Microsoft; CS-Script supports Mono natively.
    • Seen as memory safe, with built-in memory management in the CLR run-time.
    • Using only GTK# for the GUI may result in a single build for all platforms, but native look & feel is only possible on the Gnome 2.0 desktop.
    • C# has a built-in ODBC connector, though there is also an ADO.NET C# MySQL connector that supports both MySQL and MariaDB.
    • CS-Script is a hybrid between C# and ECMAScript (JavaScript), with strongly typed variables.
  • C++, wxWidgets, ChaiScript, CMake:
    • wxWidgets supports Windows, Linux and Mac OS X using native look & feel for each platform.
    • Remember to do proper memory management.
    • There is a C library for connecting directly to MariaDB, and a C library for MySQL that can be used with both DB types.
    • ChaiScript is a scripting language made especially for C++.
    • CMake is a cross-platform build system and is natively supported in Visual Studio 2017 and Qt Creator.

C# has been touted as a safer language to pursue. It has compilers for every system, although C++ has iOS support as well. There are examples of CMake being used for C# projects, considering that Mono and MonoDevelop are themselves built using CMake.


Darin's Suggestions

Using an SQL DB has some advantages over plain text. However, it has some disadvantages as well.

  • DBs don't fit into git very well. There is no need to store the DB in git: create the DB on the first run of PCGen and populate it from the chosen sources' data files.
  • Pull requests against DBs don't fit into git very well (user pull requests).
  • Plain text is easier to diff. The diff can be done on the DB input files (JSON, YAML, or XML).
  • DBs encourage you to read only the data you need, when you need it, and to discard data you no longer require, keeping memory usage down. It depends on how you use the DB results - discard or keep in memory; I think both would happen. Only the needed part of a source remains in memory, while the huge bulk remains in the DB.
    • This can be emulated with wrappers around the text file reader which only return the requested data, as sketched below.
      • This would likely require some level of indexing (like a DB would use).
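
A minimal sketch of such a wrapper (one record per line, record name as the first tab-separated field; the file layout and class name are assumptions): build a line-offset index once, then seek and read single records on demand.

  using System.Collections.Generic;
  using System.IO;

  // Hypothetical lazy reader: index once, fetch records on demand.
  public class IndexedLstReader
  {
      private readonly string _path;
      private readonly Dictionary<string, long> _index = new Dictionary<string, long>();

      public IndexedLstReader(string path)
      {
          _path = path;
          long offset = 0;
          // Assumes ASCII text and '\n' line endings; a real index
          // would track byte offsets precisely.
          foreach (var line in File.ReadLines(path))
          {
              var name = line.Split('\t')[0];
              if (name.Length > 0) _index[name] = offset;
              offset += line.Length + 1;
          }
      }

      // Read one record without keeping the whole file in memory.
      public string Fetch(string name)
      {
          if (!_index.TryGetValue(name, out var offset)) return null;
          using var stream = File.OpenRead(_path);
          stream.Seek(offset, SeekOrigin.Begin);
          using var reader = new StreamReader(stream);
          return reader.ReadLine();
      }
  }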

I would suggest keeping the text files in git, and building the DB from them if you must use a DB. You may want to look at SQLite as your DB of choice because it provides most of what you need without adding dependencies on the user (how many Windows users have MySQL or MariaDB installed?). Other-DB support can be added later if there is any benefit. SQLite is a good choice too, considering that use of the DB would primarily be reading, with characters being written into the DB.
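
A minimal sketch of that first-run build (assuming the Microsoft.Data.Sqlite package; the schema and table are invented for illustration):

  using Microsoft.Data.Sqlite;

  class DbBuilder
  {
      // Build the DB on first run and populate it from the chosen sources.
      static void Main()
      {
          using var conn = new SqliteConnection("Data Source=pcgen.db");
          conn.Open();

          var create = conn.CreateCommand();
          create.CommandText = @"CREATE TABLE IF NOT EXISTS skills (
                                     name        TEXT PRIMARY KEY,
                                     key_stat    TEXT,
                                     source_page TEXT)";
          create.ExecuteNonQuery();

          // In the real thing these values would come from parsed data files.
          var insert = conn.CreateCommand();
          insert.CommandText =
              "INSERT OR REPLACE INTO skills VALUES ($name, $stat, $page)";
          insert.Parameters.AddWithValue("$name", "Appraise");
          insert.Parameters.AddWithValue("$stat", "INT");
          insert.Parameters.AddWithValue("$page", "SkillsI");
          insert.ExecuteNonQuery();
      }
  }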

Caveats to using a DB:

  • You may be tempted to store characters in the DB. This is fine. However, an export method (back to plain text that can be imported into a new DB) would be required, and the import would need to be able to both overwrite and coexist with existing characters - I may want my player to send me his/her character, but I don't necessarily want to lose the old version. Deleting a character from the DB, along with all its corresponding entries (equipment, etc.), would also be required - unless the character only lives in separate files, as it does today. That could be supported: when writing the parser to import data into the DB, the (easier) serializer is created at the same time, and would also serve as unit testing for the parser.
  • Upgrading a DB with new versions of, say, BahamutDragon's data set would have to manage adding, modifying, and deleting old information. Most likely, this means that every piece of information has to be tagged with its source somehow, and then everything with those tags is deleted prior to inserting the current data. All of this has to happen in a single transaction if there are DB-enforced foreign keys (e.g., a character referring to equipment or spells or what have you), because otherwise the delete will fail. The transaction log could get very large. Alternatively, you go without DB-enforced foreign keys, which could leave surprises in dangling character fields (pointing at spells or equipment that no longer exist). This means that validation would again be done by PCGen on every load. Of course, if characters aren't in the DB, that may mitigate that potentiality, but it doesn't stop homebrew sets based on BD's data set from being broken by a new version of the data. The DB would be multi-table, making use of index references, which would allow retrieving the source for every entry in the various tables. Characters would store only references to the data they use, so characters would be validated on loading or on changes.
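
A minimal sketch of that source-tagged, transactional upgrade (again assuming Microsoft.Data.Sqlite; the source_id column is invented for illustration):

  using Microsoft.Data.Sqlite;

  class SourceUpgrader
  {
      // Replace everything from one source inside a single transaction.
      static void Upgrade(SqliteConnection conn, string sourceId)
      {
          using var tx = conn.BeginTransaction();

          var delete = conn.CreateCommand();
          delete.Transaction = tx;
          delete.CommandText = "DELETE FROM skills WHERE source_id = $src";
          delete.Parameters.AddWithValue("$src", sourceId);
          delete.ExecuteNonQuery();

          // ...insert the new version of this source's rows here...

          // All-or-nothing: if anything above throws, Dispose rolls back.
          tx.Commit();
      }
  }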

List file format:

  • If someone may be editing this by hand, XML gets very verbose and may not be ideal.
  • JSON and YAML are practically interchangeable, but YAML is probably easier to edit by hand.
  • YAML is probably closer to what we ended up with. I also agree on YAML, which I have used a few times but forgot to mention in the Hipchat discussions.

Maybe something like this. It'll take up more vertical space, but much less horizontal. (I don't fully understand everything in the current LST format, so I may be misstating some of this.)

 Skills:

 - Appraise:
   KeyStat: INT
   Type:
     - Intelligence
     - Standard
     - Base
     - Appraise
   SourcePage: SkillsI
 - Bluff:
   KeyStat: CHA
   Type:
     - Charisma
     - Standard
     - Base
     - Bluff
   SourcePage: SkillsI
   Bonus:
     - Skill:
       - Sleight of Hand
       - Diplomacy
       - Intimidate
      - SynergyBonus:
        - TYPE:Synergy.STACK
     - PreSkill:
       - Bluff: 5
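
For what it's worth, a file in that shape could be loaded with something like this (a sketch assuming the YamlDotNet package; error handling omitted):

  using System;
  using System.Collections.Generic;
  using System.IO;
  using YamlDotNet.Serialization;

  class YamlLoader
  {
      static void Main()
      {
          var deserializer = new DeserializerBuilder().Build();
          using var reader = new StreamReader("skills.yaml");

          // Load into a generic graph first; typed records could come later.
          var root = deserializer.Deserialize<Dictionary<string, object>>(reader);
          var skills = (List<object>)root["Skills"];
          Console.WriteLine($"Loaded {skills.Count} skills");
      }
  }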

--Tanktalus (talk) 06:10, 3 January 2018 (UTC)