I'll take a stab at answering based on my experience building two similar frameworks: one from a past project, and some recent work on something else with a slightly different approach. These are my opinions and your mileage may vary.
So, a TL;DR - I think the way you're going is pretty good. I'd say go granular, use Scripts more than KWT, and divide up your data "smartly". Now, on to the full details...
1) In both my efforts, we went more granular. Basically, each "step" of a test was a different piece of script code with parameters to dictate the salient information for executing the step. The framework I'm using now has each row in a data table indicate a full test to execute, and that works, but it means you are maintaining whole tests, not just small pieces. I prefer going granular because it's rarely the whole test that needs maintaining; it's usually some small piece like navigating to a screen, selecting a particular button, etc. Now, I didn't go so far down as having my own generic "ClickButton" methods that took on-screen objects and performed clicks. For example, one project I worked on was an online web store selling admission tickets to an amusement park. So, one "step" that I coded was "select tickets". The assumption on that step was that you were already navigated to the part of the store for selecting tickets, and the parameters were the specific ticket type and quantity. The step found the type on the web page, entered a quantity, and clicked the "add to cart" button. I had similar steps for selecting shipping methods, applying payments, etc.

Architecturally, I had two CSV files for each "test run" that I executed. One was simply a listing of the test cases to run (test 1, test 2, test 3, etc.). The other was a more complete data table that contained the steps for the test cases. Each row had the test case ID, a step descriptor/identifier, and then another column with a pipe-delimited set of data that was parsed into parameters and values to be used for the step. A piece of code created an instance of a class defined for each step and, as the data was read in, built an array of those instances that was then executed as the test case. So, whenever we needed to maintain something, we never had to maintain a whole test case, just the code for each "step".
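To make that pattern concrete, here's a minimal Python sketch of the idea. Everything in it is an illustrative assumption, not the exact code from my framework: the column headers, the `SelectTickets` class, and the `type=Adult|qty=2` parameter format are just stand-ins.

```python
import csv
import io

# Hypothetical step class: each "step" knows how to execute itself from
# its parsed parameters.
class SelectTickets:
    def __init__(self, params):
        self.ticket_type = params["type"]
        self.quantity = int(params["qty"])

    def execute(self):
        # The real step would drive the UI here: find the ticket type on
        # the page, enter the quantity, click "add to cart".
        print(f"Selecting {self.quantity} x {self.ticket_type}")

# Maps the step descriptor from the data table to a step class.
STEP_REGISTRY = {"SelectTickets": SelectTickets}

def parse_params(raw):
    # "type=Adult|qty=2" -> {"type": "Adult", "qty": "2"}
    return dict(pair.split("=", 1) for pair in raw.split("|"))

def build_test_case(step_rows, test_case_id):
    # Read the step table and build the array of step instances that
    # makes up one test case.
    steps = []
    for row in csv.DictReader(step_rows):
        if row["TestCaseID"] == test_case_id:
            cls = STEP_REGISTRY[row["StepDescriptor"]]
            steps.append(cls(parse_params(row["Parameters"])))
    return steps

# Inline stand-in for the steps CSV file.
STEP_TABLE = io.StringIO(
    "TestCaseID,StepDescriptor,Parameters\n"
    "TC001,SelectTickets,type=Adult|qty=2\n"
)

for step in build_test_case(STEP_TABLE, "TC001"):
    step.execute()
```

The payoff is that the only code you ever maintain lives inside each step class; the CSVs just reorder and re-parameterize those steps.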
2) This is kind of a matter of opinion... but I prefer scripting. KWT are good for someone who isn't familiar with writing code as text on screen with all the syntax, organization, etc. They are pretty powerful as-is and can do a lot. But some of the fancy code gymnastics I like to do to make code elegant, maintainable, and concise require script code. The framework I described briefly above was written ENTIRELY in script. However, you could easily make each "step" a keyword test that is called by name rather than using the class-instance approach I took. If your test automation staff are decent coders, I'd say go script all the way. If not, you might find a happy medium where some parts are script and some parts are KWT.
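If you go the KWT-per-step route, the dispatch gets even simpler: the step descriptor in your data table is just the name of the thing to run. A hedged sketch of that idea, with plain Python functions standing in for keyword tests (in TestComplete you'd invoke the keyword test itself instead):

```python
# Plain functions stand in for keyword tests here.
def select_tickets(params):
    print("selecting tickets:", params)

def apply_payment(params):
    print("applying payment:", params)

# The step descriptor from the data table maps straight to a callable,
# so no class instances are needed.
STEPS_BY_NAME = {
    "SelectTickets": select_tickets,
    "ApplyPayment": apply_payment,
}

def run_step(descriptor, params):
    STEPS_BY_NAME[descriptor](params)

run_step("SelectTickets", {"type": "Adult", "qty": "2"})
```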
3) As mentioned, you can easily create a test step table like I did with a minimal number of columns. I managed it with 5 columns: Test Step ID, Test Case ID, Test Step Descriptor, Parameter Data, and Checkpoint Validation. And then another table with 3 columns: Test Case ID, Enable/Disable, and Test Case Descriptor. Two data tables and a bunch of code. Another framework I worked on had a few more data tables for additional supporting data. For example, if I needed to input contact information into a CRM, the test step table might include, in the parameters column, a cross-reference to a data table of contact information to use for input. That way you don't need ALL the data in the same table; you just cross-reference out.
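Here's a small sketch of how such a cross-reference might resolve. The column layout mirrors the tables described above, but all the contents are made up for illustration:

```python
import csv
import io

# Illustrative data only; in practice these would be separate CSV files.
STEP_TABLE = io.StringIO(
    "TestStepID,TestCaseID,Descriptor,Parameters,Checkpoint\n"
    "S1,TC001,Enter contact info,contact_ref=C01,ContactSaved\n"
)

CONTACT_TABLE = io.StringIO(
    "ContactID,Name,Email\n"
    "C01,Jane Doe,jane@example.com\n"
)

def find_row(table, key_col, key):
    # A linear scan is fine at typical data-table sizes.
    for row in csv.DictReader(table):
        if row[key_col] == key:
            return row
    return None

step = find_row(STEP_TABLE, "TestStepID", "S1")
params = dict(p.split("=", 1) for p in step["Parameters"].split("|"))

# The parameter isn't the contact data itself, just a key into the
# supporting table, which keeps the step table small.
contact = find_row(CONTACT_TABLE, "ContactID", params["contact_ref"])
print(contact["Name"], contact["Email"])
```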
So...that's my input...