This places the parser in its own submodule so that we can be ready
for the next two or three phases of textual analysis. Right now we
only scan for deliberate references, but the plan is to also scan
for explicit but incidental references, and then to go further and
run full tf-idf analysis on the source.
After running `cargo clippy`, a few changes were made, and then some
were reverted. Honestly, `x.len() > 0` is WAY more readable than
`!x.is_empty()`. The exclamation mark gets swallowed up by the
surrounding text and is hard to see.
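For reference, clippy's name for this lint is `len_zero`. A minimal sketch (the function is illustrative, not project code) of keeping the preferred style while silencing the warning:

```rust
// clippy's `len_zero` lint flags `x.len() > 0` and suggests `!x.is_empty()`;
// an `allow` attribute keeps the preferred style without the warning.
#[allow(clippy::len_zero)]
fn has_refs(refs: &[String]) -> bool {
    refs.len() > 0
}

fn main() {
    assert!(has_refs(&["[[Title]]".to_string()]));
    assert!(!has_refs(&[]));
    println!("ok");
}
```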
This is good because it used to be somewhat cut-and-paste, and
that's just ugly. There's a lot of commonality between "insert note"
and "update content," since both rely heavily on parsing the content
in order to establish the web of relationships between notes and pages,
so having that algorithm ONCE makes me happier.
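The shape of that refactor, sketched with hypothetical names (the real parsing is far richer than this toy hashtag extraction):

```rust
// Illustrative sketch: both entry points delegate to one shared
// reference-extraction routine instead of duplicating it.
fn parse_refs(content: &str) -> Vec<String> {
    // Toy extraction: collect #hashtag-style words.
    content
        .split_whitespace()
        .filter(|w| w.starts_with('#'))
        .map(|w| w.trim_start_matches('#').to_string())
        .collect()
}

fn insert_note(content: &str) -> Vec<String> {
    parse_refs(content) // ...plus the actual insert, elided
}

fn update_content(content: &str) -> Vec<String> {
    parse_refs(content) // ...plus the actual update, elided
}

fn main() {
    assert_eq!(insert_note("see #CamelCase"), vec!["CamelCase"]);
    assert_eq!(update_content("see #CamelCase"), insert_note("see #CamelCase"));
    println!("ok");
}
```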
This was getting semantically confusing, so I decided to
short-circuit the whole mess by separating the two. The results are
promising. It does mean that deleting a note means traversing
two tables to clean out all the cruft, which is *sigh*, but it
also means that the tree is stored in one table and the graph in
another, giving us a much better separation of concerns down at
the SQL layer.
This removes the page/note dichotomy, since it wasn't working
as well as I'd hoped. The discipline required now is higher
where the data store layer is concerned, but the actual structures
are smaller and more efficient.
This is pretty hairy, because we're relying on the LEFT JOIN feature
to give us the root node when we need it. That's kinda ugly, but
it seems to work just fine. It also gives us the list in the
*correct* order, so the only thing we need to do is go to the last
item in the returned vector, make sure it's a root node, then go
fetch the page so we can decorate the list with the *right* root.
We'll pass this as a JSON object { [notes-in-reverse], page }.
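The post-query step described above can be sketched like this (struct and field names hypothetical; the real rows come from the LEFT JOIN query):

```rust
#[derive(Debug, PartialEq)]
struct Note {
    id: u32,
    parent_id: Option<u32>, // None marks a root node
}

// The rows arrive root-last, so verify the last row is a root and split
// it off; the caller can then fetch the page and decorate the response.
fn split_root(mut rows: Vec<Note>) -> Option<(Vec<Note>, Note)> {
    match rows.pop() {
        Some(root) if root.parent_id.is_none() => Some((rows, root)),
        _ => None,
    }
}

fn main() {
    let rows = vec![
        Note { id: 3, parent_id: Some(1) },
        Note { id: 1, parent_id: None },
    ];
    let (notes, root) = split_root(rows).expect("last row must be a root");
    assert_eq!(root.id, 1);
    assert_eq!(notes.len(), 1);
    println!("ok");
}
```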
Well, as complete as it could be without proper automated testing.
I think there'll be some more testing soon, as it doesn't make sense
for it to hang out so blatantly like this.
Both a fmt and clippy pass have shaken all the lint off, and right
now it builds without warnings or lint complaints. Wheee!
This features all of the reference types that I commonly use,
including the Org-mode `[[Title]]`, `#CamelCase`, `#lisp-case`, and `#colon:case`.
There are still edge cases around capitalization and the mixing of symbols
and numbers, and I'll have to hack on those until I'm satisfied.
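A hand-rolled sketch of scanning for those reference shapes (ASCII-oriented for brevity; the real parser's rules, and its edge cases, differ):

```rust
// Toy scanner for [[Title]] and #tag references ("-" and ":" allowed in tags).
fn scan_refs(text: &str) -> Vec<String> {
    let mut refs = Vec::new();
    let bytes = text.as_bytes();
    let mut i = 0;
    while i < bytes.len() {
        // Org-style [[Title]] reference.
        if text[i..].starts_with("[[") {
            if let Some(end) = text[i + 2..].find("]]") {
                refs.push(text[i + 2..i + 2 + end].to_string());
                i += 2 + end + 2;
                continue;
            }
        }
        // #CamelCase, #lisp-case, #colon:case references.
        if bytes[i] == b'#' {
            let rest = &text[i + 1..];
            let len = rest
                .find(|c: char| !(c.is_alphanumeric() || c == '-' || c == ':'))
                .unwrap_or(rest.len());
            if len > 0 {
                refs.push(rest[..len].to_string());
                i += 1 + len;
                continue;
            }
        }
        i += 1;
    }
    refs
}

fn main() {
    assert_eq!(
        scan_refs("see [[My Page]] and #CamelCase #lisp-case #colon:case"),
        vec!["My Page", "CamelCase", "lisp-case", "colon:case"]
    );
    println!("ok");
}
```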
This code now uses the `ParentId`/`NoteId` dichotomy supported with
Shrinkwrap. It's actually very nice.
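Written out by hand, the newtype pattern that the Shrinkwrap derive automates looks roughly like this (a sketch, not the generated code):

```rust
use std::ops::Deref;

// Distinct id types the compiler can't confuse, with transparent
// access to the inner value via Deref (what Shrinkwrap derives for you).
struct NoteId(i64);
struct ParentId(i64);

impl Deref for NoteId {
    type Target = i64;
    fn deref(&self) -> &i64 { &self.0 }
}

impl Deref for ParentId {
    type Target = i64;
    fn deref(&self) -> &i64 { &self.0 }
}

// A signature like this can no longer mix the two ids up.
fn reparent(note: &NoteId, parent: &ParentId) -> (i64, i64) {
    (**note, **parent)
}

fn main() {
    assert_eq!(reparent(&NoteId(2), &ParentId(1)), (2, 1));
    println!("ok");
}
```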
* refs/remotes/origin/reboot-20201004:
FEAT: Move note now works.
This is mostly an exercise to understand the `derive_builder` pattern.
It required a few tips to get it working, but in the end, it's
actually what I want.
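Written by hand, the pattern `derive_builder` generates looks roughly like this (field names hypothetical; the real derive also handles setters, defaults, and validation):

```rust
#[derive(Debug, PartialEq)]
struct Note {
    content: String,
    parent_id: Option<i64>,
}

// Hand-written sketch of the builder that `derive_builder` would generate.
#[derive(Default)]
struct NoteBuilder {
    content: Option<String>,
    parent_id: Option<i64>,
}

impl NoteBuilder {
    fn content(mut self, c: &str) -> Self {
        self.content = Some(c.to_string());
        self
    }
    fn parent_id(mut self, id: i64) -> Self {
        self.parent_id = Some(id);
        self
    }
    // build() fails if a required field was never set, mirroring
    // derive_builder's Result-returning build().
    fn build(self) -> Result<Note, String> {
        Ok(Note {
            content: self.content.ok_or("content is required")?,
            parent_id: self.parent_id,
        })
    }
}

fn main() {
    let note = NoteBuilder::default().content("hi").parent_id(1).build().unwrap();
    assert_eq!(note.content, "hi");
    assert!(NoteBuilder::default().build().is_err());
    println!("ok");
}
```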
I also learned a lot about how the Executor pattern, the Results<> object,
error mapping, and futures interact in this code. This is going to be
incredibly useful long-term, as long as I still keep this project "live"
in my head.
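The error-mapping part of that, boiled down to a toy (types hypothetical): each layer converts the lower layer's error into its own with `map_err`.

```rust
// A layer-specific error type, standing in for what thiserror would derive.
#[derive(Debug, PartialEq)]
enum StoreError {
    Sql(String),
}

// Stand-in for a lower-level call that fails with its own error type.
fn run_query(sql: &str) -> Result<u32, String> {
    if sql.is_empty() {
        Err("empty query".to_string())
    } else {
        Ok(1)
    }
}

// The caller maps the lower error into its own variant.
fn fetch_note(sql: &str) -> Result<u32, StoreError> {
    run_query(sql).map_err(StoreError::Sql)
}

fn main() {
    assert_eq!(fetch_note("SELECT 1"), Ok(1));
    assert_eq!(fetch_note(""), Err(StoreError::Sql("empty query".to_string())));
    println!("ok");
}
```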
Since both `insert_page` and `insert_note` need to insert a note,
having that code twice in the same block was annoying, especially
since discovering that my oh-so-clever use of `include_str!`
precludes me from using the `query!` macros, which need the query
strings as literals *before* doing their compile-time analysis.
All that wrestling with the Transaction type turned out to be much
simpler when I was able to just devolve it into an Executor.
In the great tradition of TPP, this is a win. We've gone through
the test driven development, and there is so much *learning* here:
- `tokio::test` NEEDS the `threaded_scheduler` feature to report errors correctly
- thiserror can't do enum variants the way I expected
- Different error types for different returns is not kosher
- Serde's configuration NEEDS a format, such as JSON, to work
- Rust has `include_str!()`, to embed text in a Rust program from an external source
- SQLX is still a pain, but it's manageable.